Performance impact in Oracle 8i - BLOB vs BFILE

Hi Guys,
We are evaluating interMedia to store multimedia objects.
Does anyone know if storing and retrieving documents in the Oracle database has an impact on standard data stored in the database?
Is it worth having a separate database instance for tables with interMedia objects?
Pal

Part 2:
Example 1: Let us estimate the storage requirements for a data set consisting of 500 video clips with a total size of 250 MB (average size 512 KB). Assume a LOB chunk size of 32768 bytes. Our model estimates that we need (8000 * 32) bytes, or about 250 KB, for the index (that is, roughly 8000 chunks with about 32 bytes of index overhead each) and 266 MB to hold the media data. Since the original media size is 250 MB, this represents about a 6.5% storage overhead for storing the media data in the database. The following table definition could be used to store this amount of data.
create table video_items (
  video_id   number,
  video_clip ordsys.ordvideo
)
-- storage parameters for the table in general
tablespace video1 storage (initial 1M next 10M)
-- special storage parameters for the video content
lob (video_clip.source.localdata) store as
  (tablespace video2 storage (initial 260K next 270M)
   disable storage in row nocache nologging chunk 32768);
Example 2: Let us estimate the storage requirements for a data set consisting of 5000 images with an average size of 56 KB. The total amount of media data is 274 MB. Since the average image size is smaller, it is more space-efficient to choose a smaller chunk size, say 8K, to store the data in the LOB. Our model estimates that we will need about 313 MB to store the data and a little over 1 MB to store the index. In this case the 40 MB of storage required beyond the raw media content size represents a 15% overhead.
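By analogy with Example 1, a table definition for this image set might look like the following sketch (the table, column, and tablespace names and the storage figures are assumptions, not part of the original note):
create table image_items (
  image_id number,
  image    ordsys.ordimage
)
tablespace image1 storage (initial 1M next 10M)
-- smaller 8K chunks suit the smaller average image size
lob (image.source.localdata) store as
  (tablespace image2 storage (initial 2M next 315M)
   disable storage in row nocache nologging chunk 8192);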
Estimating retrieval costs
Performance testing has shown that Oracle can achieve throughput for media content retrieval equivalent to, and even higher than, a file system. The test was configured to retrieve media data from a server system to a requesting client system. In the database case, simple C client programs used OCI with LOB read callbacks to retrieve the data from the database. In the file system case, the client program used the standard C library functions to read data from the file system. Note that in this client/server configuration, files are served remotely by the file server system. In essence, we are comparing distributed file system performance with Oracle database and SQL*Net performance. These tests were performed on Windows NT 4 SP5.
Although Oracle achieved higher absolute performance, the relative CPU cost per unit of throughput ranged from 1.7 to 3 times the file system cost. (For these tests, database performance ranged from 3.4 million to 9 million bytes/sec while file system performance ranged from 2.6 million to 7 million bytes/sec as the number of clients ranged from 1 to 5.) One reason for the very high relative CPU cost at the higher end is that as the 100 Mb/s network approaches saturation, the system uses more CPU to achieve the next increment of throughput. If we restrict ourselves to not exceeding 70% of network utilization, then the database can use up to 2.5 times as much CPU as the file system per unit of throughput.
NOTE WELL: The extra CPU cost factors pertain only to the media-retrieval aspect of the workload. They do not apply to the entire system workload. See the example below.
Example: A file-based media asset system uses 10% of a single CPU simply to serve media data to requesting clients. If we were to store the media in an Oracle database and retrieve content from the database, then we could expect to need 20-25% of a single CPU to serve content at the same throughput rate.

Similar Messages

  • Performance impact on an Oracle 11g database from enabling audit

    Hi All,
    Shall we enable auditing on some Siebel DB tables such as S_PARTY, S_CONTACTS, S_ORDER, S_QUOTE, S_ORG_EXT?
    We need to see who deleted account records from the Oracle tables manually, since auditing is not currently enabled.
    We have given the DELETE privilege to all users, as required by the Siebel application.
    So is it a good idea to get auditing enabled on these selected tables, or is there a performance impact on the database?
    Is it a good idea to enable auditing for these tables, especially in Siebel?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
    PL/SQL Release 11.2.0.1.0 - Production
    CORE    11.2.0.1.0      Production
    TNS for HPUX: Version 11.2.0.1.0 - Production
    NLSRTL Version 11.2.0.1.0 - Production

    Hello,
    OK, do it and generate an AWR report to see how performance is impacted. Remember that auditing just some tables is not a big matter; auditing everything is the problem, which is why fine-grained auditing exists. Please also remember to clean out the audit records regularly, because auditing can become a space problem if you have many deletes, which should not happen in your case.
    Kind regards
    Mohamed
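    For reference, a minimal sketch of per-table auditing (the SIEBEL schema name and the AUDIT_TRAIL setting are assumptions, not from the thread):
    -- audit_trail must be enabled first, e.g.
    -- ALTER SYSTEM SET audit_trail=DB SCOPE=SPFILE;  (restart required)
    AUDIT DELETE ON siebel.s_party BY ACCESS;
    AUDIT DELETE ON siebel.s_org_ext BY ACCESS;
    -- afterwards, review who deleted what:
    SELECT username, obj_name, action_name, timestamp
      FROM dba_audit_trail
     WHERE obj_name IN ('S_PARTY', 'S_ORG_EXT');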

  • Using BLOB or BFILE datatype

    Hi, anyone used BLOB or BFILE before?
    Currently I am thinking of using BLOB or BFILE to store documents (.doc, .pdf, .ppt, .rtf, .csv). I can't decide yet which to use. Anyone got a recommendation?
    1. Is there any performance issue when using BLOBs after I have stored them in a different tablespace?
    2. Will export take much longer if I use BLOBs?
    3. As for BFILEs, will they get corrupted if the physical files are deleted or moved to some other location?
    Thank you in advance; your comments/advice are greatly appreciated.
    Regards,
    lbinsoon

    1. No; keeping the BLOB in a different tablespace from the row it resides in is not a performance problem.
    2. Yes, and how much longer depends on how much BLOB data you have. BFILE data is not exported (only the pointers are), but BLOB data IS exported, making for longer-running exports.
    3. Yes. Well, not corrupted, but of course you won't be able to access the file. You must update the BFILE to point to the new name or location, which can be done with a simple update statement thus:
    update bf set b = bfilename('d:\tmp','some_binary_file.dat')
    assuming column b is of type BFILE.
    Tom Best
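    One detail worth adding to the example above: BFILENAME's first argument is a directory alias, so a matching DIRECTORY object must exist (the names here are hypothetical):
    CREATE OR REPLACE DIRECTORY tmp_dir AS 'd:\tmp';
    UPDATE bf SET b = BFILENAME('TMP_DIR', 'some_binary_file.dat');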

  • Performance impact on displaying images when tested with loadrunner

    Hi,
    We have a page in the application that displays PDF images to the user.
    The PDFs are stored on an HP-UX file system, and the BFILE path of each image is stored along with other metadata. The average PDF size is around 250 KB. The images display fine when the user wants to view them.
    The problem we have been facing is more related to performance.
    When LoadRunner tests were conducted with 75 users concurrently viewing the images, CPU utilization shot up to 100%.
    All other resources such as memory, io are fine and the image display time is
    within acceptable limits.
    Are there any settings or configurations that can bring the CPU utilization down to tolerable limits? Also, the box hosts other applications which we fear might be impacted, hence it is very important for us to bring down the CPU utilization.
    The code to display the PDF is similar to the one in the sample app:
    PROCEDURE DISPLAY_PDF_PRC (IMAGE_PATH_I IN BFILE)
    IS
      V_BLOB  BLOB;
      V_BFILE BFILE;
    BEGIN
      V_BFILE := IMAGE_PATH_I;
      -- stage the file contents in a temporary BLOB
      V_BLOB := EMPTY_BLOB();
      DBMS_LOB.CREATETEMPORARY(V_BLOB, TRUE);
      DBMS_LOB.FILEOPEN(V_BFILE, DBMS_LOB.FILE_READONLY);
      DBMS_LOB.LOADFROMFILE(V_BLOB, V_BFILE, DBMS_LOB.GETLENGTH(V_BFILE));
      DBMS_LOB.FILECLOSE(V_BFILE);
      -- send the HTTP headers, then stream the BLOB to the client
      OWA_UTIL.MIME_HEADER('application/pdf', FALSE);
      OWA_UTIL.HTTP_HEADER_CLOSE;
      WPG_DOCLOAD.DOWNLOAD_FILE(V_BLOB);
      DBMS_LOB.FREETEMPORARY(V_BLOB);
    EXCEPTION
      WHEN OTHERS THEN
        Htp.Prn('No image found.');
    END;
    We first thought creating the temporary BLOB might be costly, so we modified the code to use a LOB locator. Still, CPU utilization was at 100%.
    The next thing we tried was to eliminate the creation and use of BLOBs altogether and render the images directly from the BFILE, as in the code below; we also tried to use browser caching.
    PROCEDURE DISPLAY_PDF_PRC (IMAGE_PATH_I IN BFILE)
    IS
      V_BFILE BFILE;
    BEGIN
      V_BFILE := IMAGE_PATH_I;
      -- ask the browser to cache the document for an hour
      htp.p('Expires: ' || to_char(sysdate + 1/24, 'FMDy, DD Month YYYY HH24:MI:SS'));
      OWA_UTIL.MIME_HEADER('application/pdf', FALSE);
      OWA_UTIL.HTTP_HEADER_CLOSE;
      -- stream straight from the BFILE; no temporary BLOB needed
      WPG_DOCLOAD.DOWNLOAD_FILE(V_BFILE);
    END;
    Still, the CPU utilization was at 100%.
    So can you please point to any configurations that need to be done on the Apache app server/DB server, or any optimizations at the code level, to restrict the CPU utilization?
    Thanks in Advance
    Rakesh

    Typically, you do not refer to PDFs as images. Common image formats are .jpg, .gif, .png, .bmp, etc.
    Can you store them directly on the file system and just reference their URLs instead of reading them out of the database? If so, this should all but eliminate the CPU load.
    If you only have one database on this machine, you can use the Database Resource Manager to throttle the CPU utilization of the sessions downloading images. If you have more than one db, then the Resource Manager is basically useless, which is one of the main reasons to only install one db per machine.
    Another thought is to use a web cache instance in front of the HTTP Server if there are a lot of repeat views of the same PDFs. This way you cache the first view of the PDF on the web cache tier so subsequent requests don't go against the db.
    Yet another option (though not out for HP-UX yet) is the 11g "Secure Files" feature. I did some informal testing on this last week, and read performance was easily 3x faster than traditional LOBs. My tests weren't very scientific, as I was using VMware on a laptop, which generally has very poor physical I/O performance. They claim read performance is comparable to the Linux file system.
    Tyler
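    To make the Resource Manager suggestion concrete, here is a minimal sketch (the plan, group, and percentage values are assumptions, not recommendations):
    BEGIN
      DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP('IMAGE_SERVING', 'sessions that download PDFs');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN('THROTTLE_PLAN', 'cap CPU used for image serving');
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'THROTTLE_PLAN', group_or_subplan => 'IMAGE_SERVING',
        comment => 'at most 25% CPU under contention', cpu_p1 => 25);
      DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
        plan => 'THROTTLE_PLAN', group_or_subplan => 'OTHER_GROUPS',
        comment => 'everything else', cpu_p1 => 75);
      DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
      DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
    END;
    /
    ALTER SYSTEM SET resource_manager_plan = 'THROTTLE_PLAN';
    The downloading sessions still have to be mapped to the IMAGE_SERVING group (for example via DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING) before the directive takes effect.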

  • Index creation online - performance impact on database

    hi,
    I have an Oracle 11.1.0.7 database running on Linux as a 3-node RAC.
    I have a huge table which has more than 255 columns and is about 400 GB in size; it is also highly fragmented because of constant DML activity.
    Questions:
    1. For now I am trying to create an index online while the business applications are running.
    Will there be any performance impact on the database when creating an index online on a single column of table 'TBL' while applications are active against the same table? So basically: does index creation on an object during DML operations on the same object have a performance impact on the database? And is there a major difference in impact between creating the index online and offline?
    2. I tried to build an index on a column which has NULL values on this same table 'TBL', which has more than 255 columns, is about 400 GB in size, is highly fragmented, and has about 140 million rows.
    I asked for the applications to be shut down, but the index creation with a parallel degree of 4 still took more than 6 hours to complete.
    We have a pre-prod database which holds an exported and imported copy of the prod data, so pre-prod is a highly defragmented copy of prod.
    When I created the same index on the same column there, it took only 15 minutes to complete.
    I am not sure why it took more than 6 hours on the highly fragmented prod copy compared to only 15 minutes on the highly defragmented pre-prod copy.
    Any thoughts would be helpful.
    Thanks.
    Phil.
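    For reference, the kind of statement under discussion would look like this (the index and column names are hypothetical):
    CREATE INDEX tbl_col1_idx ON tbl (col1) ONLINE PARALLEL 4;
    ONLINE lets DML against TBL continue during the build; the trade-offs are discussed in the reply below.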

    How are you measuring the "fragmentation" of the table ?
    Is the pre-prod database running single instance or RAC ?
    Did you collect any workload stats (AWR / Statspack) on the pre-prod and production systems while creating (or failing to create) the index ?
    Did you check whether the index creation ended up in-memory, single-pass, or multi-pass in the two environments?
    The commonest explanation for this type of difference is two-fold:
    a) the older data needs a lot of delayed block cleanout, which results in a lot of random I/O to the undo tablespace - slowing down I/O generally
    b) the newer end of the table is subject to lots of change, so needs a lot of work relating to read-consistency - which also means I/O on the undo system
      --  UPDATED:  but you did say that you had stopped the application so this bit wouldn't have been relevant.
    On top of this, an online (re)build has to lock the table briefly at the start and end of the build, and in a busy system you can wait a long time for the locks to be acquired - and if the system has been busy while the build has been going on it can take quite a long time to apply the journal file to finish the index build.
    Regards
    Jonathan Lewis

  • Performance impact using nested tables and objects

    Hi,
    I am using Oracle 11g.
    While creating a package, I am using a lot of nested tables based on objects, which are passed between multiple functions in the package.
    Will this have any performance impact, since all the data is stored in memory?
    How can I measure the performance impact when the data grows?
    Regards,
    Oracle User
    Edited by: user9080289 on Jun 30, 2011 6:07 AM
    Edited by: user9080289 on Jun 30, 2011 6:42 AM

    user9080289 wrote:
    > While creating a package, I am using a lot of nested tables based on objects, which are passed between multiple functions in the package.
    Not the best of ideas in general, in PL/SQL. This is not client code that can lay sole claim to most of the memory. It is server code, and one of many server processes that need to share the available resources. So capitalism is fine on a client, but you need socialism on the server? ;-)
    > Will this have any performance impact, since all the data is stored in memory?
    Interestingly, yes. Usually crunching data in memory is better. In this case it may not be so. The memory used is the most expensive memory Oracle can use: the PGA, private process memory. This means each process running that code will need lots of memory.
    If you're not passing the data structures by reference, it means even bigger demands on memory, as each data structure needs to be copied into the call stack and duplicated.
    The worst-case scenario is that such code consumes so much free server memory, and makes such huge demands on keeping it in physical memory, that it trashes memory management: the swap daemons are unable to keep up with the demand of swapping virtual memory pages into and out of memory, and most CPU time is spent by the swap daemons.
    I have seen servers crash due to this. I have seen a single PL/SQL process causing this.
    > How can I measure the performance impact when the data grows?
    Well, you need to look at the impact of your code on PGA memory. It is not SQL performance or I/O performance that is a factor - just how much private process memory your code needs in order to execute.
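    A quick way to watch this while exercising the package (a hedged sketch; the statistic names are as they appear in v$statname):
    SELECT n.name, ROUND(s.value / 1024 / 1024, 1) AS mb
      FROM v$mystat s JOIN v$statname n ON n.statistic# = s.statistic#
     WHERE n.name IN ('session pga memory', 'session pga memory max');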

  • Performance Impact with OR concatenation / Inlist Iterator

    Hello guys,
    is there any performance impact in using OR concatenations or IN-lists?
    The function of both is the "same":
    1) Concatenation (OR-processing)
    SELECT * FROM emp WHERE mgr# = 1 OR job = 'YOURS';
    - Similar to a query rewrite into 2 separate queries
    - Which are then 'concatenated'
    2) Inlist Iterator
    SELECT * FROM dept WHERE d# IN (10, 20, 30);
    - Iteration over an enumerated value list
    - Every value executed separately
    - Same as a concatenation of 3 OR-ed values
    So I want to know if there is any performance impact in using IN-lists instead of OR concatenations.
    Thanks and Regards
    Stefan

    The note is very misleading and far from complete; but there is one critical point of difference that you need to observe. It's talking about using a tablescan to deal with an IN-list (and that's NOT "in-list iteration"), my comments start by saying "if there is a suitable indexed access path."
    The note, by the way, describes a transformation to a UNION ALL - clearly that would be inefficient if there were no indexed access path. (Given the choice between one tablescan and several consecutive tablescans, which option would you choose ?).
    The note, in effect, is just about a slightly more subtle version of "why isn't oracle using my index". For "shorter" lists you might get an indexed iteration, for "longer" lists you might get a tablescan.
    Remember, Metalink is not perfect; most of it is just written by ordinary people who learned about Oracle in the normal fashion.
    Quick example to demonstrate the difference between concatenation and iteration:
    drop table t1;
    create table t1 as
    select
         rownum     id,
         rownum     n1,
         rpad('x',100)     padding
    from
         all_objects
    where
         rownum <= 10000;
    create index t1_i1 on t1(id);
    execute dbms_stats.gather_table_stats(user,'t1')
    set autotrace traceonly explain
    select
         /*+ use_concat(t1) */
         n1
    from
         t1
    where
         id in (10,20,30,40,50,60,70,80,90,100)
    set autotrace off
    The execution plan I got from 8.1.7.4 was as follows - showing the transformation to a UNION ALL - this is concatenation and required 10 query block optimisations (which were all done three times):
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=20 Card=10 Bytes=80)
       1    0   CONCATENATION
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       4    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       5    4       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       6    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       7    6       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
       8    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
       9    8       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      10    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      11   10       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      12    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      13   12       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      14    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      15   14       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      16    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      17   16       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      18    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      19   18       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
      20    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=2 Card=1 Bytes=8)
      21   20       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=1 Card=1)
    This is the execution plan I got from 9.2.0.8, which doesn't transform to the UNION ALL, and only needs to optimise one query block.
    Execution Plan
       0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=3 Card=10 Bytes=80)
       1    0   INLIST ITERATOR
       2    1     TABLE ACCESS (BY INDEX ROWID) OF 'T1' (Cost=3 Card=10 Bytes=80)
       3    2       INDEX (RANGE SCAN) OF 'T1_I1' (NON-UNIQUE) (Cost=2 Card=10)
    Regards
    Jonathan Lewis
    http://jonathanlewis.wordpress.com
    http://www.jlcomp.demon.co.uk

  • Oracle.sql.BLOB.setBytes() Error

    Hi,
    I'm trying to use Java to put a large array of bytes into a BLOB table column. I'm first inserting the new row with an empty_blob() and then calling select <blob_column> ... for update and getting the oracle.sql.BLOB out of the resultset. I then try to call setBytes() on this BLOB and I get the following exception:
    java.sql.SQLException: Invalid argument(s) in call: putBytes()
    at oracle.jdbc.driver.T4CConnection.putBytes(T4CConnection.java:2440)
    at oracle.sql.BLOB.setBytes(BLOB.java:916)
    I'm using Oracle XE (oracle-xe-11.2.0-1.0.x86_64.rpm), the latest ojdbc6.jar, and jboss 4.2.2 on CentOS.
    A code snippet of what I'm doing:
    ...
    stmt = con.prepareStatement("select blob_column from blob_table where id=? for update");
    stmt.setLong(1, Id);
    ResultSet rs = stmt.executeQuery();
    try {
        if (rs.next()) {
            WrappedResultSet wrappedRs = (WrappedResultSet) rs;
            BLOB oracleBlob = ((OracleResultSet) wrappedRs.getUnderlyingResultSet()).getBLOB(1);
            if (oracleBlob != null) {
                byte[] bytes = getData();
                int pos = 0;
                long bytesLeft = bytes.length;
                log.debug("Attempting to write " + bytes.length + " bytes to BLOB");
                while (bytesLeft > 0) {
                    int bytesWritten = oracleBlob.setBytes(pos, bytes, pos, MAXBUFSIZE);
                    log.debug("Wrote " + bytesWritten + " bytes to BLOB");
                    bytesLeft -= bytesWritten;
                    pos += bytesWritten;
                }
            }
        }
    } finally {
        rs.close();
    }
    ...
    Any help would be greatly appreciated!

    Welcome to the forum!
    Thanks for posting the code and the DB, JDBC and app server versions. Those are what is needed to help.
    >
    java.sql.SQLException: Invalid argument(s) in call: putBytes()
                while(bytesLeft > 0) {
                    int bytesWritten = oracleBlob.setBytes(pos, bytes, pos, MAXBUFSIZE);
                    log.debug("Wrote " + bytesWritten + " bytes to BLOB");
                    bytesLeft -= bytesWritten;
                    pos += bytesWritten;
    That 'Invalid argument . . .' was your clue to look at the ARGUMENT values you are passing to the method call. You could have easily done that by displaying the values to the console each time in the loop BEFORE the method call.
    This is the signature of that method in the Javadocs (edited to highlight the relevant parts):
    http://docs.oracle.com/javase/6/docs/api/java/sql/Blob.html#setBytes(long, byte[])
    >
    int setBytes(long pos, byte[] bytes, int offset, int len) throws SQLException
    Writing starts at position pos in the BLOB value; len bytes from the given byte array are written.
    Parameters:
    pos - the position in the BLOB object at which to start writing; the first position is 1
    bytes - the array of bytes to be written to this BLOB object
    offset - the offset into the array bytes at which to start reading the bytes to be set
    len - the number of bytes to be written to the BLOB value from the array of bytes bytes
    >
    This is what you are passing for 'len': MAXBUFSIZE
    Most likely that value is LARGER than the 'byte array' that you are using; perhaps it is even the MAX int size.
    That value is invalid.
    In addition each time thru the loop you increment 'pos' and use 'pos' as the 'offset' into your array. Then you once again use MAXBUFSIZE as the number of bytes to write from your array. Even if MAXBUFSIZE is less than the length of your array at some point it will likely be greater than what is left of the array.
    For example, if MAXBUFSIZE is 2 and your array length is 3 the first 'put' will put the first two bytes. Then the second put will use a 'pos' of 2 and try to put 2 more bytes; except there is only one byte left.
    Your code updates the entire BLOB value. Best practices are to use the stream methods for reading and writing BLOB/CLOB rather than the 'setBytes' method you are using. The main reason for this is performance: the stream methods write DIRECTLY to the database.
    >
    Notes:
    The stream write methods described in this section write directly to the database when you write to the output stream. You do not need to run an UPDATE to write the data. However, you need to call close or flush to ensure all changes are written. CLOBs and BLOBs are transaction controlled. After writing to either, you must commit the transaction for the changes to be permanent.
    >
    See 'Reading and Writing BLOB and CLOB Data' in the JDBC Dev Guide
    http://docs.oracle.com/cd/B19306_01/java.102/b14355/oralob.htm#i1058044
    >
    Example: Writing BLOB Data
    Use the setBinaryOutputStream method of an oracle.sql.BLOB object to write BLOB data.
    The following example reads a vector of data into a byte array, then uses the setBinaryOutputStream method to write an array of character data to a BLOB.
    java.io.OutputStream outstream;
    // read data into a byte array
    byte[] data = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
    // write the array of binary data to a BLOB
    outstream = ((BLOB)my_blob).setBinaryOutputStream(1L);
    outstream.write(data);
    // per the note above, close (or flush) so the changes actually reach the database
    outstream.close();

  • Isolation level and performance impact?

    Hi
    I'm new to BDB JE and building some prototypes to evaluate it.
    Given a simple use case of storing the key/value pair <String, List<Event>>, mapping a user to his/her list of events, in the db: new events are added for the user, and this happens (although fairly rarely) concurrently.
    Using Serializable isolation will prevent any corruption of the list of events, since the events are effectively added serially for the user. I was wondering:
    1. if there are any lesser levels of isolation that would still be adequate
    2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    3. building on 2., is there a performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there is no lock contention (ie, no concurrent updates) vs not using transactions at all?
    Thanks!
    Peter

    Have you seen this section of the Getting Started Guide on isolation levels in JE? http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html
    Our default is Repeatable Read, and that could be sufficient for your application depending on your access patterns, and the semantic sense of the items in your list. I think you're saying that the data portion of a record is the list of events itself. With RepeatableRead, you'll always see only committed data, and retrieving that record from a JE database will always return a consistent view of a given list. See http://www.oracle.com/technology/documentation/berkeley-db/je/TransactionGettingStarted/isolation.html#serializable for an explanation of what additional guarantee you get with Serializable.
    > 2. using Serializable isolation, is there a performance impact on updating users non concurrently (ie there's no lock contention since for the majority of cases concurrent updates won't happen) vs the default isolation level?
    Yes, there is an additional cost. When using Serializable isolation, additional locks are taken on adjacent data records. In addition to the cost of acquiring the lock (which would be low in a non-contention case), there may be additional I/O needed to fetch adjacent data records.
    > 3. building on 2., is there a performance impact (other than obtaining and releasing locks) on using transactions with X isolation during updates of existing entries if there is no lock contention (ie, no concurrent updates) vs not using transactions at all?
    In (2) we compared the cost of Serializable to Repeatable Read. In (3), we're comparing the cost of non-transactional access to the default Repeatable Read transaction.
    Non-transactional is always a bit cheaper, even if there is no lock contention. On top of the cost of acquiring the locks, transactional operations use more memory and disk space, and execute some transaction setup and teardown code. If there are concurrent operations, even if there is no contention on a given lock, there could be some stress on the lock table latches and transaction tables. That said, if your application is I/O bound, the CPU differences between non-txnal and txnal operations become more of a secondary factor. If you're I/O bound, the memory and disk space overhead does matter, because the cache is used more efficiently with non-txnal operations.
    Regards,
    Linda

  • Performance Impact - Boost in RefinementMenu Cartridge

    Hi,
    Please find below the business requirement and the approach used.
    The size facet is driven from Endeca Experience Manager using the RefinementMenu cartridge; the website contains around 300 unique sizes.
    We have a requirement where the business user should have the flexibility to reorder the size facet values. We are achieving this using the Boost editor within the RefinementMenu cartridge.
    We need information about the performance impact of using Boost in the RefinementMenu cartridge, considering our size facet would contain around 300 sizes.

    If this is an Oracle Portal question, please repost to the
    appropriate Oracle Portal discussion forum.

  • Table has 80 million records - Performance impact if we stop archiving

    HI All,
    I have a table (Oracle 11g) which has around 80 million records; till now we have done weekly archiving to maintain its size. But now one of the architects at my firm suggested that Oracle has no problem maintaining even billions of records with just a little performance tuning.
    I was just wondering: is that true? And moreover, what kind of effect would there be on querying and insertion if the table is 80 million rows and increasing every day?
    Any comments welcomed.

    What is true is that the Oracle database can manage tables with billions of rows. But when talking about data size you should give the table size instead of the number of rows, because you won't have the same table size if the average row size is 50 bytes as when the average row size is 5K.
    As for the performance impact, it depends on the queries that access this table: the more data the queries need to process and/or return as a result set, the more impact this can have on their performance.
    You don't give enough input for a good answer. Ideally you should give the DDL statements that create this table and its indexes, and the SQL queries that use them.
    In some cases table partitioning can really help, but this is not always true (and you can only use partitioning with Enterprise Edition plus additional licensing).
    Please read http://docs.oracle.com/cd/E11882_01/server.112/e25789/schemaob.htm#CNCPT112 .
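    As an illustration of the partitioning suggestion, a minimal sketch (the table and column names are hypothetical):
    CREATE TABLE big_table (
      id      NUMBER,
      created DATE,
      payload VARCHAR2(100)
    )
    PARTITION BY RANGE (created) (
      PARTITION p_2012 VALUES LESS THAN (DATE '2013-01-01'),
      PARTITION p_max  VALUES LESS THAN (MAXVALUE)
    );
    Old partitions can then be archived or dropped without touching current data.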

  • Will there be any performance impact

    Hi All,
    Currently I have an employee table with 1 million records (emp ID is the primary key). In a process, I want to insert a new employee ID, use it in the program, and finally delete it (this simplifies changes to the current program). This will amount to 100K transactions every day.
    I'm planning to commit only after the delete (i.e. insert -> make some updates -> delete the same row -> commit).
    Will these emp IDs be added to the index and cause a performance impact, even though I'm committing the transaction only after deleting the rows?
    database : oracle 10g.
    Thanks!!!

    If I understand you correctly, this sounds like a use case for a global temporary table (with the same structure as your employee table).
    As you insert, update and delete the same row within one single transaction (for the convenience of your code I assume), those row will only ever be visible to the session that (temporarily) inserts them into the table.
    The design you are suggesting has (at least) the following performance impact:
    1) it will inhibit concurrency
         - other sessions reading the table while transient rows are inserted and are being updated may have to clone some data buffers and apply UNDO to get read consistent clones of the buffers being modified.
         - you may cause buffer busy wait events as you modify the blocks belonging to your employee table while other sessions want to read the blocks affected by these modifications (the severity of this depends on how your 100K transactions are spread throughout the day and what activity runs on the database in parallel).
         - you will increase activity on the hash chain latches protecting the buffers of your employee table (the same applies to the severity as for the previous point).
    2) You increase the amount of REDO generated by your code. Using a global temporary table your 100K transactions will also generate some REDO, but significantly less.
    3) Using the global temporary table approach you don't need to delete the rows once you are done with your processing - you simply define your global temporary table as "ON COMMIT DELETE ROWS".
    4) You'll have to do all the work associated with the index maintenance to insert and delete the corresponding index entry (see my post from  Jun 24, 2013 8:16 PM)
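    A minimal sketch of the suggested global temporary table (the column list is hypothetical; it would mirror the real employee table):
    CREATE GLOBAL TEMPORARY TABLE employee_gtt (
      emp_id   NUMBER PRIMARY KEY,
      emp_name VARCHAR2(100)
    ) ON COMMIT DELETE ROWS;
    The transient rows live in the session's temp space, never touch the real table or its index, and vanish automatically at commit.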

  • Rec/client parameter & Performance Impact

    Hi all,
    We have been asked by our audit team to set the parameter Rec/client=300 (our production client) in our production system.
    But from reading a few forums and notes, we feel that setting rec/client=300 will definitely impact performance.
    So before setting this parameter we need to evaluate the performance impact on our ECC 6.0 system. Can anyone help us evaluate the performance impact of this parameter setting on the system below?
    - ECC 6.0 / oracle 10g / HPUX - 64 bit / 3500 users / 2.5 terabyte data
    Thanks
    Senthil

    Hello
    Normally only customizing tables should have the logging flag. You can verify it in SE13 -> <table> -> Log data changes.
    To list all tables with logging on, you can use this select statement:
    SQL> select tabname, protokoll from sapr3.dd09l where protokoll = 'X';
    As long as only so-called customizing tables are logged, you should be fine. If you have some heavy-traffic Z* tables, then two things might happen:
    - performance might suffer
    - the logging table DBTABLOG will explode
    So please make sure only the necessary tables are logged and if possible test on the QAS system, if the logging leads to performance problems.
    Best regards
    Michael
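    A quick way to gauge the logging volume before and after the change (a hedged sketch; the SAPR3 schema owner is an assumption, as in the select above):
    SQL> select segment_name, bytes/1024/1024 mb from dba_segments where segment_name = 'DBTABLOG';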

  • Oracle Linux Patching/Updates Impact on Oracle Database

    I have several questions regarding Oracle Linux patching and its impact on Oracle Database.
    3 node Cluster
    Operating System: Oracle Linux Release 5 update 2
    Kernel: 2.6.18-92.1.22.0.1.el5 x86_64 GNU/Linux
    Grid Infrastructure: Oracle Grid Infrastructure 11g Release 2 (11.2.0.2)
    Database: Oracle Database 11g Release 2 Enterprise Edition (11.2.0.2) with RAC option
    ASM with ASMLib is being used for Data and FRA.
    *1)* Given you are at Oracle Linux Release 5 update 2, will applying only security patches require any changes to ASMLib, Oracle Grid Infrastructure and/or Oracle Database (software binaries, configuration etc.)?
    *2)* Given you are at Oracle Linux Release 5 update 2, will applying bug fixes and security patches to Oracle Linux 5 update 2 require any changes to ASMLib, Oracle Grid Infrastructure and/or Oracle Database (software binaries, configuration etc.)?
    *3)* Given you are updating from Oracle Linux Release 5 update 2 to Oracle Linux Release 5 update 8, will this require any changes to ASMLib, Oracle Grid Infrastructure and/or Oracle Database (software binaries, configuration etc.)?
    I would most wholeheartedly appreciate it if someone could answer the specific questions asked, inline if possible. Our system administrator is preparing to perform some patching, and as the DBA I'd like to be certain what changes I need to prepare for. I also have a single-instance database set up with Oracle Restart; I don't know if this affects the answers above.
    Thanks in advance.

    How do you plan to apply Oracle Linux security patches? As far as I'm aware the yum security plug-in does not work with the Oracle public yum repository, and there is no really feasible way to automate and determine security-related patches only, unless you set up Oracle Enterprise Manager Grid Control (11g) or Cloud Control (12c) and have a subscription to Oracle ULN.
    If you upgrade your Oracle Linux to the latest available version using the public yum repository, all patches, including errata will be applied. You cannot apply patches and stay on a specific release version unless you have a subscription and appropriate yum repository access.
    As far as I'm aware there are no changes required to your Oracle database or ASM setup and configuration when updating the OS, provided the version of the Oracle database is supported for the specific OS version (ASM features like ADVM/ACFS are kernel-version dependent). Upgrading from one major release to another, such as upgrading Oracle Linux 5 to Oracle Linux 6, is not supported and will require upgrading the Oracle Database to 11.2.0.3. You can check the Oracle database certification matrix to determine which Oracle database release version is required for which OS version: http://docs.oracle.com/cd/E11882_01/relnotes.112/e23558/toc.htm
    The relinking of Oracle Database binaries is a common practice when updating the OS. I do not think it is necessary under Linux. Oracle binaries are typically standalone applications, or use Oracle provided libraries, or in some cases use OS shared libraries, which does not require relinking of Oracle binaries. Upgrading the Oracle database software of course requires relinking.
    Edited by: Dude on Dec 4, 2012 2:09 PM

  • Performance issues with Oracle EE 9.2.0.4 and RedHat 2.1

    Hello,
    I am having some serious performance issues with Oracle Enterprise Edition 9.2.0.4 on RedHat Linux 2.1. The processor goes berserk at 100% for long periods (some 5 min.), and all the RAM gets used.
    Some environment characteristics:
    Machine: Intel Pentium IV 2.0GHz with 1GB of RAM.
    OS: RedHat Linux 2.1 Enterprise.
    Oracle: Oracle Enterprise Edition 9.2.0.4
    Application: We have a small web application with 10 users (for now) and very basic queries (all in stored procedures). We also use the latest version of ODP.NET with default connection settings (some low pooling, etc.).
    Does anyone know what could be going on?
    Is anybody else having this similar behavior?
    We changed from SQL Server, so we are not the world experts on the matter. But we want a reliable system nonetheless.
    Please help us out; give us some tips, tricks, or guides…
    Thanks to all,
    Frank

    Thank you very much, and sorry I couldn't write sooner. It seems that the administrator doesn't see much kswapd activity, so I don't really know what is going on.
    We are looking at some queries and some indexing, but this is nuts: if I had some poor queries, which we really don't, the server would show spikes, right?
    But it goes crazy, with two Oracle processes taking all the resources. There seems to be little swapping going on.
    So now what? They are already talking about MS SQL Server. Please help me out here, this is crazy!!!
    We have maybe the most powerful combination here. What is Oracle doing?
    We even killed the IIS worker process, so that nothing was touching the database, and still those two processes kept going.
    Can someone help me?
    Thanks,
    Frank
