Set big table KEEP cache

Hello,
Version 10.2.0.4 on Linux Red Hat 4.
In my data warehouse I have a table whose size is 5GB.
This is the most important table, and most of the queries join this table.
Currently I have 32GB of memory on the machine.
The SGA+PGA = 17GB
Free memory on the machine is about 15GB:
NAME                                 TYPE        VALUE
sga_max_size                         big integer 12G
sga_target                           big integer 12G
pga_aggregate_target                 big integer 5G

Each night this table is truncated and populated again with new data.
I am thinking of putting this table in the KEEP cache.
I would like your feedback on whether you think it's the right thing to do.
Thanks

Hi,
There is a best practice for this configuration (from the Oracle documentation):
"A good candidate for a segment to put into the KEEP pool is a segment that is smaller than 10% of the size of the DEFAULT buffer pool and has incurred at least 1% of the total I/Os in the system."
Hope this helps,
Cheers.
Cuneyt

Similar Messages

  • Performance question - Caching data of a big table

    Hi All,
    I have a general question about caching; I am using an Oracle 11g R2 database.
    I have a big table of about 50 million rows that is accessed very often by my application. Some queries run slowly and some are OK. But (obviously) when the data of this table is already in the cache (so basically when a user requests the same thing twice or many times) it runs very quickly.
    Does somebody have any recommendations about caching the data / a table of this size?
    Many thanks.

    Chiwatel wrote:
    With better formatting (I hope), sorry I am not used to the new forum !
    Plan hash value: 2501344126
    | Id | Operation                              | Name           | Starts | E-Rows | E-Bytes | Cost (%CPU) | Pstart | Pstop | A-Rows | A-Time      | Buffers | Reads  |  OMem |  1Mem | Used-Mem  |
    |  0 | SELECT STATEMENT                       |                |      1 |        |         |  7232 (100) |        |       |  68539 | 00:14:20.06 |    212K |  87545 |       |       |           |
    |  1 |  SORT ORDER BY                         |                |      1 |   7107 |    624K |    7232 (1) |        |       |  68539 | 00:14:20.06 |    212K |  87545 | 3242K |  792K | 2881K (0) |
    |  2 |   NESTED LOOPS                         |                |      1 |        |         |             |        |       |  68539 | 00:14:19.26 |    212K |  87545 |       |       |           |
    |  3 |    NESTED LOOPS                        |                |      1 |   7107 |    624K |    7230 (1) |        |       |  70492 | 00:07:09.08 |    141K |  43779 |       |       |           |
    |* 4 |     INDEX RANGE SCAN                   | CM_MAINT_PK_ID |      1 |   7107 |    284K |      59 (0) |        |       |  70492 | 00:00:04.90 |     496 |    453 |       |       |           |
    |  5 |     PARTITION RANGE ITERATOR           |                |  70492 |      1 |         |       1 (0) |    KEY |   KEY |  70492 | 00:07:03.32 |    141K |  43326 |       |       |           |
    |* 6 |      INDEX UNIQUE SCAN                 | D1T400P0       |  70492 |      1 |         |       1 (0) |    KEY |   KEY |  70492 | 00:07:01.71 |    141K |  43326 |       |       |           |
    |* 7 |    TABLE ACCESS BY GLOBAL INDEX ROWID  | D1_DVC_EVT     |  70492 |      1 |      49 |       2 (0) |  ROWID | ROWID |  68539 | 00:07:09.17 |   70656 |  43766 |       |       |           |

    Predicate Information (identified by operation id):

      4 - access("ERO"."MAINT_OBJ_CD"='D1-DEVICE' AND "ERO"."PK_VALUE1"='461089508922')
      6 - access("ERO"."DVC_EVT_ID"="E"."DVC_EVT_ID")
      7 - filter(("E"."DVC_EVT_TYPE_CD"='END-GSMLOWLEVEL-EXCP-SEV-1' OR "E"."DVC_EVT_TYPE_CD"='STR-GSMLOWLEVEL-EXCP-SEV-1'))
    Your user has executed a query to return 68,000 rows - what type of user is it? A human being cannot possibly cope with that much data, and it's not entirely surprising that it might take quite some time to return it.
    One thing I'd check is whether you're always getting the same execution plan - Oracle's estimates here are out by a factor of about 10 (7,100 rows predicted vs. 68,500 returned); perhaps some of your variation in timing relates to plan changes.
    If you check the figures you'll see about half your time came from probing the unique index, and half came from visiting the table. In general it's hard to beat Oracle's caching algorithms, but indexes are often much smaller than the tables they cover, so it's possible that your best strategy is to protect this index at the cost of the table. Rather than trying to create a KEEP cache for the index, though, you MIGHT find that you get some benefit from creating a RECYCLE cache for the table, using a small percentage of the available memory - the target is to fix things so that table blocks you won't revisit don't push index blocks you will revisit from memory.
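    As a rough illustration of that idea (the RECYCLE pool size here is only a guess, not a recommendation):

    ALTER SYSTEM SET db_recycle_cache_size = 256M SCOPE=BOTH;
    ALTER TABLE d1_dvc_evt STORAGE (BUFFER_POOL RECYCLE);

    Table blocks then age out of the small RECYCLE pool quickly without pushing the index blocks out of the DEFAULT pool.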
    Another detail to consider is that if you are visiting the index and table completely randomly (for 68,500 locations) it's possible that you end up re-reading blocks several times in the course of the visit. If you order the intermediate result set from the driving table first you may find that you're walking the index and table in order and don't have to re-read any blocks. This is something only you can know, though. The code would have to change to include an inline view with a no_merge and no_eliminate_oby hint.
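    A sketch of that restructuring, with the driving table and its column names guessed from the predicate section above (only the hints and the general shape are the point here):

    SELECT e.*
    FROM  (SELECT /*+ no_merge no_eliminate_oby */
                  ero.dvc_evt_id
           FROM   cm_maint_sched ero   -- hypothetical name for the table behind CM_MAINT_PK_ID
           WHERE  ero.maint_obj_cd = 'D1-DEVICE'
           AND    ero.pk_value1    = '461089508922'
           ORDER BY ero.dvc_evt_id
          ) ero,
          d1_dvc_evt e
    WHERE e.dvc_evt_id      = ero.dvc_evt_id
    AND   e.dvc_evt_type_cd IN ('END-GSMLOWLEVEL-EXCP-SEV-1', 'STR-GSMLOWLEVEL-EXCP-SEV-1');

    Walking the index and table in dvc_evt_id order gives each block a chance to be revisited while it is still cached.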
    Regards
    Jonathan Lewis

  • HS ODBC GONE AWAY ON BIG TABLE QRY

    Hello,
    I have an HS ODBC connection set up pointing to a MySQL 5.0 database on Windows using MySQL ODBC 3.51.12. Oracle XE is on the same box, and tnsnames.ora, sqlnet.ora, and the HS init file are all set up.
    The problem is I have a huge table, 100 million rows, in MySQL, and when I run a query against it in Oracle SQL Developer it runs for about two minutes, then I get errors: ORA-00942, lost connection, or gone away.
    I can run a query against a smaller table in the schema and it returns rows quickly. So I know the HS ODBC connection is working.
    I noticed the HS service running on Windows starts up and uses 1.5 GB of memory and the CPU time maxes out at 95% on the big table query, then the connection drops.
    Any advice on what to do here? There don't seem to be any config settings for the HS service to limit or increase the rows, or increase the cache.
    MySQL does have some advanced ODBC driver options that I will try.
    Does anyone have any suggestions on how to handle this overloading problem??
    Thanks for the help,

    FYI, HS is Oracle Heterogeneous Services, used to connect to non-Oracle databases.
    I actually found a workaround. The table is so large that the query crashes. So I broke the table up into 5 MySQL views, and now I am able to query the views from an Oracle stored procedure that does an INSERT ... SELECT into an Oracle table.
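    Roughly what that workaround looks like from the Oracle side (the link, view and staging-table names are examples; MySQL object names usually have to be double-quoted through the gateway):

    -- one INSERT ... SELECT per MySQL view, so each pull stays small enough for the HS agent
    INSERT /*+ APPEND */ INTO stage_big_table
    SELECT * FROM "v_big_part1"@mysql_link;
    COMMIT;
    -- repeat for "v_big_part2" .. "v_big_part5"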

  • How to copy a set of tables from a database to another periodically?

    We have a 4 node RAC primary database (10.2.0.2) with a physical standby (10.2.0.2) on our production site. Of late we noticed that one of the applications (APP2) is causing heavy load due to large data downloads on the primary database servers. Our primary database has 2 schemas,
    1) one being the main schema with all objects, (USER1)
    2) and the other has views that query some set of tables from the main schema. (USER2)
    The application APP2 uses USER2 views to query and download huge amounts of data periodically. We need to be able to give accurate data results to APP2, but at the same time take the load off the database, as APP2 is not our main application.
    We would like to know if there are any cost-effective options in Oracle to do this, and if so, what is the best option? Does anyone have experience setting up something like this before?
    We have thought of creating another 10.2.0.2 database on a different server and giving it regular updates (like data feeds) from the current database. The current database data changes quite often, so the data feeds would have to be done often to keep the data current on the new database. So, we are not exactly sure how to go about it. Would a COPY command help?
    Please advise.

    user623066 wrote:
    Our 4 node RAC is already busy with our main application, which has its connections spread across all 4 nodes.
    Our main applications services are the same on all nodes and use all 4 nodes in the same way.
    There are some other utilities that we run from one of the app servers that connect to only 1 of the nodes.
    APP2 uses all 4 servers, which is again controlled by connection pooling and distributes the load.
    Wouldn't separate services be more beneficial here? If APP2 is locked down to one node during normal operation, that ensures that other connections aren't going to be competing for hardware with APP2 on 3 of the 4 nodes. If APP2 is generating less than 25% of the total load, you can let the other applications use whatever hardware resources are left idle on the node APP2 is locked down to.
    By large data downloads, I meant both the increase in network traffic and the CPU load on the database nodes.
    We are already using Resource Manager to limit the resources allocated to USER2, which APP2 uses.
    And we have also limited the large downloads to take place in the early hours of the day when the traffic from our main application is less.
    But this has still not been optimal for the usage requirements of APP2. APP2 is also doing queries all through the day, but has a limit on the number of rows downloaded during peak hours.
    Can you explain a bit more about why using Resource Manager hasn't been sufficient? That's normally a pretty good way to prevent one hungry user from drastically affecting everyone else. Perhaps you just need to tweak the configuration here.
    Logical standby seems a good option. But we need to keep our physical standby in place. Is it possible to have a logical standby and a physical standby? (Of course on separate servers.)
    Sure. You can have as many standby servers, of whatever type, as you'd like.
    Could we use a COPY command to copy data for the set of tables to a new database? Or is that also a complex option?
    You could, yes. COPY is a SQL*Plus command that has been deprecated for copying data between Oracle databases for quite a while. It only works from SQL*Plus and is only designed for one-time operations (i.e. there is no incremental COPY command). I can just about guarantee that's not what you want here.
    How do materialized views work? Wouldn't they still reside on the main database? Or is it possible to have remote materialized views?
    You probably don't want materialized views, but if you decide to go down that path (a minimal sketch follows these steps):
    - You'd create materialized view logs on the base tables to track changes
    - You'd create materialized views on the destination database that select data over a database link back to the source database
    - You'd put those materialized views into one or more refresh groups that are scheduled to refresh periodically
    - During a refresh, assuming incremental refreshes, the materialized view logs would be read and applied to the materialized views on the destination system to update the materialized views.
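    A minimal sketch of those steps, with all object names invented for illustration:

    -- on the source database: track changes to the base table
    CREATE MATERIALIZED VIEW LOG ON user1.orders WITH PRIMARY KEY;

    -- on the destination database: a link back to the source, and a fast-refreshable MV
    CREATE DATABASE LINK src_link CONNECT TO user2 IDENTIFIED BY pwd USING 'PRODDB';

    CREATE MATERIALIZED VIEW orders_mv
      REFRESH FAST WITH PRIMARY KEY
      AS SELECT * FROM user1.orders@src_link;

    -- put the MV(s) into a refresh group that refreshes every 15 minutes
    BEGIN
      DBMS_REFRESH.MAKE(
        name      => 'app2_grp',
        list      => 'ORDERS_MV',
        next_date => SYSDATE,
        interval  => 'SYSDATE + 15/1440');
    END;
    /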
    Justin

  • Purchasing - Set Up Table in source system - LBWG

    Hi all,
    Currently we are extracting Purchasing data into BI7, and we already have a large volume which has been delta'd across.
    We need to apply some notes to the source system which involve deleting the data from the set-up table, transaction LBWG.
    Can anyone tell me what impact this will have on the deltas being run in BI7... and what steps, if any, I would need to take?
    Thanks, Lee

    Hi Lee,
    Filling the set-up table is generally a one-time activity.
    Once the set-up table population is done, load the data into BW through the initialization option. If this init request gets to status "Green", from then on you don't need to refer to the data in the set-up table.
    Depending on your wishes / space constraints you may keep or delete the set-up table data.
    Regards
    Mr Kapadia

  • Large data sets and table partitioning : removing data

    Hi,
    I have to delete rows from a big partitioned table.
    Someone tells me it is more efficient for performance to drop a whole partition (or subpartition, if I use composite partitioning) than to delete rows one at a time.
    He tells me that data access (in my partition) will be very bad if I delete rows progressively (in this partition) instead of keeping the rows and dropping the whole partition when none of its rows are needed any more.
    What do you think about it?
    Thanks
    Sandrine

    Hi Sandrine,
    I agree with what you're being told. It'll be much more efficient to "clone" the data you want to keep from your partition somewhere (a clone table, ...) and then drop the whole partition.
    The main thing is that if you drop an object there's no BEFORE IMAGE stored in your UNDO structures (UNDO TS / RBS), so you'll have far fewer disk I/Os.
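    To illustrate the difference (table and partition names are only examples):

    -- deleting row by row (or in bulk) generates undo and redo for every row touched
    DELETE FROM sales_hist PARTITION (p_2011_q1);

    -- dropping the partition is mostly a data-dictionary operation, with no per-row undo
    ALTER TABLE sales_hist DROP PARTITION p_2011_q1 UPDATE GLOBAL INDEXES;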
    Hope this helps ^^

  • Managing a big table

    Hi All,
    I have a big table in my database. When I say big, I mean both the data stored in it (around 70 million records) and the number of columns (425).
    I do not have any problems with it now, but going ahead I assume it will become a bottleneck or very difficult to manage.
    I have a star schema for the application, of which this is a master table.
    Apart from partitioning the table, is there any other way of better handling such a table?
    Regards

    Hi,
    Usually fact tables tend to have a smaller number of columns and a larger number of records, while dimension tables are the opposite: a larger number of columns, which is where the power of the dimension lies, and comparatively few records (in some exceptional cases even millions of records). So the high number of columns makes me think that the fact table may be, only may be, since I don't have enough information, improperly designed. If that is the case then you may want to revisit that design, and most likely you will find some 'facts' in your fact table that can become attributes of one of the dimension tables it is linked to.
    Can you say why you are adding new columns to the fact table? A fact table is created for a specific business process, and if done properly there shouldn't be such a requirement to add new columns. A fact is usually limited in the number of metrics you can take from it; in fact, the opposite is more common, a factless fact table.
    In any case, from the point of view of handling this large table with so many columns, I would say that you have to focus on stopping the growth in the number of columns. There is nothing in the database itself, such as partitioning, that can do this for you. So one option is to figure out which columns you want to 'vertically partition' and split the table into at least two new tables. The set of columns will be those that are more frequently used or those that are more critical to you. Then you will have to link these two tables together and to the rest of the dimensions. But again, if you keep adding new columns then it is just a matter of time before you run into the same situation again.
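    A minimal sketch of that 'vertical partition' idea (all table and column names here are invented for illustration):

    -- split the 425-column master into a "hot" table and a "cold" table sharing the same key
    CREATE TABLE fact_master_hot  AS SELECT master_id, cust_key, prod_key, amount FROM fact_master;
    CREATE TABLE fact_master_cold AS SELECT master_id, attr_005, attr_006 /* ... the rarely used columns ... */ FROM fact_master;

    -- queries that need columns from both halves join them back on the shared key
    SELECT h.master_id, h.amount, c.attr_005
    FROM   fact_master_hot h
    JOIN   fact_master_cold c ON c.master_id = h.master_id;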
    I am sorry, but I cannot offer better advice than to revisit the design of your fact table. For that you may want to have a look at http://www.kimballgroup.com/html/designtips.html
    LW

  • 2LIS_02_ITM - Set up Table

    All,
    Can we run set-up table jobs (LO datasources; in the case of purchasing, program RMCENEUA) in parallel in the (R/3) production environment by checking the 'block documents' option, with different document dates in the selection?
    The reason for this is that if we go with this option we can schedule the maximum number of set-up jobs in parallel and complete the set-up table fill in a few hours instead of running a series of jobs with different document dates...
        please provide your inputs
    Thanks
    Kamal

    Hi kamal,
    We can do parallel execution, but we have to keep in mind that, in the case of jobs scheduled in parallel, we must assign different run names by all means. If a job is cancelled (on purpose or through an exceptional situation) it can only be restarted by means of the correct run name.
    With rgds,
    Anil Kumar Sharma .P

  • Very Big Table (36 Indexes, 1000000 Records)

    Hi
    I have a very big table (76 columns, 1,000,000 records); these 76 columns include 36 foreign key columns, each FK has an index on the table, and only one of these FK columns has a value at any given time while all the others are NULL. All these FK columns are of type NUMBER(20,0).
    I am facing a performance problem which I want to resolve, taking into consideration that this table is used with DML (Insert, Update, Delete) along with query (Select) operations; all these operations and queries are run daily. I want to improve this table's performance, and I am considering these scenarios:
    1- Replace all these 36 FK columns with 2 columns (ID, TABLE_NAME) (ID for master table ID value, and TABLE_NAME for master table name) and create only one index on these 2 columns.
    2- partition the table using its YEAR column, keep all FK columns but drop all indexes on these columns.
    3- partition the table using its YEAR column, and drop all FK columns, create (ID,TABLE_NAME) columns, and create index on (TABLE_NAME,YEAR) columns.
    Which approach is more efficient?
    Do I have to keep "master-detail" relations in mind when building Forms on this table?
    Are there any other suggestions?
    I am using Oracle 8.1.7 database.
    Please Help.

    Hi everybody
    I would like to thank you for your cooperation and I will try to answer your questions, but please note that I am a developer in the first place and I am new to Oracle database administration, so please forgive me if I make any mistakes.
    Q: Have you gathered statistics on the tables in your database?
    A: No I did not. And if I must do it, must I do it for all database tables or only for this big table?
    Q:Actually tracing the session with 10046 level 8 will give some clear idea on where your query is waiting.
    A: Actually I do not know what you mean by "10046 level 8".
    Q: what OS and what kind of server (hardware) are you using
    A: I am using the Windows 2000 Server operating system; my server has 2 Intel XEON 500MHz CPUs + 2.5GB RAM + 4 * 36GB hard disks (on a RAID 5 controller).
    Q: how many concurrent users do you have and how many transactions per hour
    A: I have 40 concurrent users, and an average of 100 transactions per hour, but the peak can go up to 1000 transactions per hour.
    Q: How fast should your queries be executed
    A: I want the queries to be executed in about 10 to 15 seconds, or else everybody here will complain. Please note that because this table is highly used, there is a very good chance that 2 or more transactions exist at the same time, one of them performing a query and the other performing a DML operation. Some of these queries are used in reports, and they can be long queries (e.g. retrieving the summary of 50,000 records).
    Q: please show us the explain plan of these queries
    A: If I understand your question, you ask me to show you the explain plan of those queries; well, first, I do not know how, and second, I think it is a big question because I cannot collect all the kinds of queries that have been written against this table (some of them exist in server packages, and others are performed by Forms or Reports).
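    To answer the three "how do I" points in one place, here is a rough sketch of the commands involved (schema, table and column names are examples; the old-style syntax reflects the 8.1.7 version mentioned):

    -- 1) gather optimizer statistics on the big table (and, via CASCADE, its 36 indexes)
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP', tabname => 'BIG_TABLE', cascade => TRUE);
    END;
    /

    -- 2) trace your own session with event 10046 at level 8 (includes wait events)
    ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
    -- ... run the slow query here ...
    ALTER SESSION SET EVENTS '10046 trace name context off';
    -- then format the trace file from user_dump_dest with tkprof

    -- 3) show the execution plan of a single query
    EXPLAIN PLAN FOR
      SELECT * FROM big_table WHERE year = 2002;
    @?/rdbms/admin/utlxpls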

  • Keep cache option

    Could anyone explain to me, if I use the KEEP cache option for a static table, whether it will have an impact on SGA memory? (I believe the table is stored in the buffer cache.)

    According to the Oracle documentation: "A good candidate for a segment to put into the KEEP pool is a segment that is smaller than 10% of the size of the DEFAULT buffer pool and has incurred at least 1% of the total I/Os in the system."
    So if you load a large table into the buffer cache, it will occupy a lot of memory that could otherwise be free for other processing. If the remaining buffer cache becomes insufficient, database performance will worsen.
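    As a rough way to check the 10% guideline quoted above against your own table (the dictionary views are standard; the table name is an example):

    -- size of the candidate segment, in MB
    SELECT owner, segment_name, bytes/1024/1024 AS seg_mb
    FROM   dba_segments
    WHERE  segment_name = 'MY_STATIC_TABLE';

    -- current size of the DEFAULT buffer pool, in MB
    SELECT name, current_size
    FROM   v$buffer_pool
    WHERE  name = 'DEFAULT';

    If the segment is much more than roughly 10% of the DEFAULT pool, keeping it fully cached will squeeze everything else out of memory.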

  • High Disk I/O on Big table!!

    Hi,
    We have one big table in our production database, 1 GB+, and most of the online searches done through the application & reports use this table. Hence, a large amount of I/O occurs & the response time is slow.
    To reduce the amount of disk reads I've moved the table into a different tablespace with a 16k block size; earlier it was on an 8k block size. I've defined a custom cache size of 150M in order to be able to create the tablespace with the custom block size (16k).
    Now, even after moving the table into the new tablespace I don't see any difference in the response time or I/O?? It is the same as before.
    We are working on Oracle 9.2.0.7 & Linux AS 4 on 32 bit. I'm not sure how to figure out whether the above scenario should work or not??
    Kindly, provide some light on it.
    Thanks
    Ratheesh

    Ratheesh,
    My statement was more of an observation than a recommendation. There is much that goes on with a disk IO, but I wanted to point out what the impact of doubling the block size is on a single-block random IO.
    In such an operation, there are electrical/magnetic operations (IO between controller and disk interface) and there are mechanical operations (positioning the disk head). The mechanical operations are slower.
    Let's take a disk that has an average access time of 9ms and a sustained transfer rate of 60MB/sec. The first accounts for head positioning, rotational latency, etc., and the second for how quickly it can read off the disk, which is a function of the rotational speed of the disk (rpm). Note that both observed numbers will vary, as the characteristics differ from disk to disk and depend on the starting state, but what is published is the average.
    For a block access, you can expect the service time to be the average access time (9ms) plus the transfer time. Given our 60MB/sec transfer rate, 1K (1000 bytes in disk-speak) takes 1/60,000 of a second, which is about 17usec. So the transfer time for an 8k block would be 8 times that number and for a 16k block it would be 16 times that number. In either case, we are still in the domain of usec, which is noise when compared to the msec access times of most disks.
    For random IO, the majority of your performance is going to be characterized by the access time of the disk rather than by the block size of the access.
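    Putting rough numbers on that, using the example 9 ms / 60 MB/sec disk above:

    8K block : 9 ms + 8/60,000 s  ≈ 9 ms + 0.13 ms ≈ 9.1 ms per random read
    16K block: 9 ms + 16/60,000 s ≈ 9 ms + 0.27 ms ≈ 9.3 ms per random read

    So the larger block adds only about 1-2% to the service time of each random read, which is consistent with seeing no change in response time after the move to the 16k tablespace.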
    Chris

  • Filling of set up tables for several company codes

    hi
    I am filling a set-up table for the application-specific inventory control statistical data. But the data-filling selection screen contains selection parameters for a single company code.
    In my scenario there are several company codes whose data resides in R/3, and I need to bring data for all company codes into the set-up table.
    So when filling the set-up table, do I have to fill data for each individual company code, or do I fill data for one company only and the data pertaining to the other company codes will be populated to the set-up tables on the delta load?
    Kindly give your expert advice on this situation.

    Hello,
    I don't understand why it should be mandatory to fill in the company code on the R/3 side during initialisation!
    But even if it is mandatory, you have to do init runs one by one on the R/3 side for all the company codes that you need.
    Check in RSA3 for records after the init runs.
    If it were not mandatory, you could have run the setup (init) run on the R/3 side for all company codes at once by leaving the field blank!
    Also, on the BW side you can have company code as a selection in the InfoPackage; there you can also give it as a range rather than running one code at a time.
    Hope it is helpful now.
    regards,

  • Extract big table to a delimited file

    Hi Gurus,
    A big table, more than 4 GB in size, needs to be extracted/exported from a 10g DB into a text file;
    the column delimiter is "&|" and the row delimiter is "$#".
    I cannot do it from TOAD as it hangs while extracting the big table.
    Any suggestion will be highly appreciated.
    Thanks in advance.

    >
    A big table, more than 4 GB in size, needs to be extracted/exported from a 10g DB into a text file;
    the column delimiter is "&|" and the row delimiter is "$#".
    I cannot do it from TOAD as it hangs while extracting the big table.
    Any suggestion will be highly appreciated.
    >
    You will need to write your own code to do the unload.
    One possibility is to write a simple Java program and use JDBC to unload the data. This will let you unload the data to any client you run the app on.
    The other advantage of using Java for this is that you can easily ZIP the data as you unload it and use substantially less storage for the resulting file.
    See The Java Tutorials for simple examples of querying an Oracle DB and processing the result set.
    http://docs.oracle.com/javase/tutorial/jdbc/overview/index.html
    Another possibility is to use UTL_FILE. There are plenty of examples in the SQL and PL/SQL forum if you search for them.
    There is also a FAQ for 'How do I read or write an Excel file?' (note - this also includes delimited files).
    SQL and PL/SQL FAQ
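    If you go the UTL_FILE route, a minimal sketch (directory path, table and column names are examples) using the "&|" and "$#" delimiters could look like this:

    CREATE OR REPLACE DIRECTORY dump_dir AS '/u01/app/oracle/dump';

    DECLARE
      f UTL_FILE.FILE_TYPE;
    BEGIN
      f := UTL_FILE.FOPEN('DUMP_DIR', 'big_table.txt', 'w', 32767);
      FOR r IN (SELECT col1, col2, col3 FROM big_table) LOOP
        -- PUT_LINE writes one physical line per row (UTL_FILE limits a line to 32767 bytes),
        -- so each record ends with the "$#" marker followed by a newline
        UTL_FILE.PUT_LINE(f, r.col1 || '&|' || r.col2 || '&|' || r.col3 || '$#');
      END LOOP;
      UTL_FILE.FCLOSE(f);
    END;
    /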

  • How to set the table input in Query template?

    Hi all.
    I need to call Bapi_objcl_change, with an import parameter and a table as input. I have done this in BLS. I have set the table input in the form of XML. In BLS, I get the output (the value gets changed in SAP R/3 to what I gave in BLS). But if I set the same XML structure in the query template, I don't get the output; the table input parameter does not accept that XML source. How do I set the table input in the query template?
    can anyone help me?
    Regards,
    Hemalatha

    Hema,
    You probably need to XML encode the data so that it will pass properly and then xmldecode() it to set the BAPI input value.
    Sam

  • Verification email not sent when downloading Adobe software via my dad's account

    I'm downloading adobe software via my dad's account and when it's done loading, it has a message where it says it sent a verification email to my email address, but when I check it shows that it hasn't been set.  I keep clicking the resend email and it claims it does but it still hasn't sent the email to my address.  How do I fix this?

    It is probably sending the email to your Dad's email address if you are using his account.
