Berkeley DB is taking time to come up.

We have a Berkeley DB database with 56 GB of data. Whenever we bring up the server, Berkeley DB takes a very long time to come up: the first request is not answered until about 15 minutes after startup, which is far too slow.
Is there a way we can reduce the response time of the first request? Please share your ideas.

Hello,
From your description it sounds like pre-loading the data cache
should improve performance at application startup by avoiding
random disk access. If you have access to the "My Oracle Support"
system, Note 781726.1 discusses this topic.
Some suggestions are:
1. If the data fits in RAM:
- Open a cursor on the first record and iterate over the entire database
to load all database pages (see the sketch after this list).
- Alternatively, use the memory pool API to fetch each page directly.
2. Other options:
- Use the db_dump/db_load utilities to dump and reload the database.
- On UNIX/Linux systems, the file-system cache can be warmed by reading
the database files with the "dd" or "cat" utilities (for example,
dd if=mydb.db of=/dev/null bs=1M). When the application is then started,
the Berkeley DB cache should populate more quickly because the data is
already in the kernel page cache.
3. For data sets larger than RAM whose working sets do fit into the
cache, identify the working sets and bring those into the cache.
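For illustration, here is a minimal C sketch of the cursor-based warm-up
from suggestion 1. It assumes an already-opened DB handle; the environment
setup, the database name, and most error handling are omitted, and
"warm_cache" is just a hypothetical helper name, so treat this as a
starting point rather than a drop-in implementation.

#include <string.h>
#include <db.h>

/* Warm the Berkeley DB cache by touching every record once at startup.
 * "dbp" is assumed to be an already-opened DB handle. */
int warm_cache(DB *dbp)
{
    DBC *cursor;
    DBT key, data;
    int ret;

    memset(&key, 0, sizeof(key));
    memset(&data, 0, sizeof(data));

    if ((ret = dbp->cursor(dbp, NULL, &cursor, 0)) != 0)
        return ret;

    /* Each DB_NEXT read pulls the page holding the record into the
     * cache; the reads themselves do the warming. */
    while ((ret = cursor->get(cursor, &key, &data, DB_NEXT)) == 0)
        ;

    cursor->close(cursor);
    return ret == DB_NOTFOUND ? 0 : ret;
}

Called once after DB->open() and before the first client request, this
replaces the cold random reads of the first request with a single
sequential pass at startup.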
Related documentation can be found at:
Selecting a Cache Size:
http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/general_am_conf.html#am_conf_cachesize
Flushing the Database Cache:
http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/am_sync.html
Retrieving Records with a Cursor:
http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/am_cursor.html#am_curget
The Memory Pool Subsystem:
http://download.oracle.com/docs/cd/E17076_01/html/programmer_reference/mp.html
Thanks,
Sandra

Similar Messages

  • Transfer of XML through a firewall is taking time

    Hi,
    I have following scenario:
    Publisher BPEL --> || Firewall || Subscriber BPEL
    The XML being transferred through the firewall is around 9 MB.
    When the two BPEL processes are deployed on the local intranet, the XML transfer is very fast and the Subscriber BPEL completes within seconds. But when they are deployed over the internet with a firewall in between, the XML transfer takes about 15 minutes.
    Is there a solution to reduce the XML transfer time, or some workaround to handle such an issue?
    Thanks,
    Bhavnesh.

    If it is a 9 MB file, that is quite large; I think this will come down to latency and bandwidth. Test from your environment how long it takes to download a 9 MB file, then test from the remote location. If that takes around 15 minutes, there is little you can do but get a better ISP.
    I doubt the issue will be with the firewall.
    cheers
    James

  • PartitionOn taking time

    Hi All,
    We have weekly maintenance for an Essbase cube. There are a total of 2 source cubes and target cubes for reporting purposes. The target cubes are accessed through the
    partitions. During the weekend maintenance process we load the data into the cubes, and at the end we run the Partitionon.bat script to create the partition between the cubes.
    Here is the problem: the script is common for all countries, but for some countries it executes in 20 minutes and for some it takes nearly 1 hour, even
    though the cube size is much smaller than that of the countries which finish first.
    Thanks in advance.

    Hi MattRollings ,
    As I said, we have two types of partitions: one for static reports and another for dynamic reporting. Yes, the script we use for Partitionon creates the partitions. The script is common for all countries and runs like this:
    1. First it creates the transparent partition.
    2. Then, to create the replica, it refreshes from the two source cubes, after which aggregation happens as usual.
    The main problem is the refresh from source to target: that is what takes the time. To be exact, the refresh from source to replica takes 15 minutes, while for the other countries it takes only 8 minutes.
    I went ahead and compared the replica cube sizes of two countries. One country's cube is 2 GB and the other is 2.5 GB; the smaller one takes 20 minutes to complete while the other takes 1 hour. Even though the second one is only 0.5 GB larger than the first, it should take about 25 minutes to create the replica partition.
    What might be the reason the replica partition takes so much longer to create? Does it depend on defined blocks, actual blocks, non-missing leaf blocks, etc.?
    Thanks Matt in advance.

  • Customized login page taking time to load

    Dear Experts,
    Could you kindly suggest how I can tune the customized login page,
    as it is taking a long time to load?
    Warm Regards
    Upendra Agrawal.

    Hi,
    Thanks for your quick reply. The only changes I made are in LogonTopArea.jsp and LogonBottomArea.jsp:
    I added a Flash file and images. Earlier the total file size of the
    com.sap.portal.runtime.logon.par file was around 314 KB, but now it is around 800 KB.
    Other than that, nothing has changed.
    Please advise.
    Thanks & Regards
    Upendra Agrawal

  • I installed Mountain Lion over Snow Leopard and my MacBook Pro 13" is taking time to log in and log out

    I installed Mountain Lion over Snow Leopard and my MacBook Pro 13" is taking a long time to log in and log out. Any solution?

    Hi JoeyR. Well, according to this link at the Apple Store, OS X Mountain Lion became available in July and I downloaded it for $19.99. I figured I would do that before renewing my Norton security SW. Are we talking about the same thing?
    http://www.apple.com/osx/

  • Publishing and overwriting a universe updates new measures quickly in Business Explorer / Information Spaces, but they take time to update in the dashboard

    Hi gurus:
    I have a recurring problem when publishing and overwriting a universe from IDT: new measures update quickly in Business Explorer / Information Spaces but take a long time to be updated in the dashboard. It takes approximately half an hour for them to update when accessing the universe from dashboards.
    Regards:
    Jawad Khalid

  • On one website it takes too much time to load the page; on another site it loads without delay. Do I need to enable or change any settings?

    On one website it takes a long time to load the page; on another PC it loads with no delay (with Internet Explorer). On my PC other websites open quickly, but this website takes too much time with Firefox.

    Zepo wrote:
    My iMac has been overwhelmed almost since I bought it new. After some digging, the Genius Bar suggested it's my Aperture library being on the same
    internal terabyte drive as my operating system.
    Having a single internal hard drive overfilled (drives slow as they fill) is very likely contributing to your problems, but IMO "my Aperture library being on the same internal terabyte drive as my operating system" is very unlikely to be contributing. In fact the Library should stay on an underfilled internal drive (roughly, for speed, I would call ~half full "underfilled"), not on the Drobo.
    Instead, build a Referenced-Masters workflow with the Library and OS on an internal drive, Masters on the Drobo, and OS 10.6.8 (there have been issues reported with OS 10.7 Lion). Keep a Vault backup of the Library on the Drobo, and of course back up all Drobo data off site.
    No matter what you do with i/o your C2D Mac is not a strong box for Aperture performance. If you want to really rock Aperture move to one of the better 2011 Sandy Bridge Macs, install 8 GB or more of RAM and build a Referenced-Masters workflow with the Library and OS on an internal solid state drive (SSD).
    Personally I would prefer investing in a Thunderbolt RAID rather than in a Drobo but each individual makes his/her own network speed/cost decisions. The Drobo should work OK for referenced Masters even though i/o is limited by the Firewire connection.
    Do not forget the need for off site backup. And I suggest that in the process of moving to your new setup it is most important to get the data safely and redundantly copied and generally best to disregard how long it may take.
    HTH
    -Allen Wicks

  • Procedure taking time

    Hello Experts,
    I am running a procedure which takes too much time to execute. How can I find out which statement or query in the procedure is taking the time? Is there any package available to check this?
    Thanks & Regards,
    Manish

    Execute the below query to find out the session id of the session in which your procedure is getting executed.
    [c]
    SELECT s.sid
    ,s.serial#
    ,ddl.name
    ,ddl.type
    ,ddl.owner
    ,s.status
    ,s.osuser
    ,s.machine
    FROM dba_ddl_locks ddl
    ,v$session s
    WHERE ddl.session_id = s.sid;
    [\c]
    Once you get the SID, execute the query below to see which SQL it is executing:
    [c]
    SELECT /*+ leading(s) */
    s.sid
    ,s.serial#
    ,st.piece
    ,st.sql_text
    ,s.status
    ,s.osuser
    ,s.machine
    FROM v$session s
    ,v$sqltext st
    WHERE s.sql_hash_value = st.hash_value(+)
    AND s.sid = &sid_from_above_query
    ORDER BY st.piece;
    [\c]
    Regards
    Arun

  • JSP taking time to load

    Hi All
    I am having a strange problem. The JSP file is taking too much time to load, and IE does not respond properly even after the page is loaded. I admit that the JSP has a significant amount of Java code in it. What could be the problem? The response from the server arrives quickly; the page takes its time after receiving the response from the server.
    Regards
    Prashant

    > What could be the problem?
    Slow hardware, slow code, perhaps some other app eating your resources; it could be anything, really. I think it's time for a profiler.
    > The JSP file is taking too much time to load and IE does not respond properly even after the page is loaded
    IE is the best browser for improper behavior, but how exactly is it misbehaving in your case? It might be a bad piece of JavaScript that is causing it headaches.

  • Using a materialized view in a query .. query is taking time?

    Hi I have a query :-
    SELECT rownum as id, u.last_name, u.first_name, u.phone phone, u.empid, u.supervisor_id
    FROM emp_view u -- using view
    CONNECT BY PRIOR u.empid = u.supervisor_id
    START WITH u.sbcuid = 'ph2755';
    here emp_view is a view .
    ------ The above query is taking 3 sec to execute.
    Then I created a materialized view emp_mv whose query is the same as the emp_view query.
    After this i executed following sql
    SELECT rownum as id, u.last_name, u.first_name, u.phone phone, u.empid, u.supervisor_id
    FROM emp_mv u -- using materialized view
    CONNECT BY PRIOR u.empid = u.supervisor_id
    START WITH u.sbcuid = 'ph2755';
    this query takes 15 sec to execute..... :(
    Can anyone please tell me why the MV query is taking more time?

    Hi,
    In your first case you query a view, meaning that you query the underlying tables. Those probably have indexes, and their stats are up to date.
    In your second case you query a materialized view, meaning that you query the underlying base table of that mview.
    That table probably does not have the same indexes to support the query.
    But of course, I'm just guessing based on the little information provided.
    If you want to take this further, please search for "When your query takes too long" and "How to post a tuning request".
    These two threads holds valuable information, not only on how to ask this kind of question, but also how to start solving it on your own.
    Regards
    Peter

  • GL posting is taking time

    Hi All,
    Every day we run a GL process. GL posting takes more than 1 hour. Likewise, 4 GL postings run during the GL process, which causes a
    performance delay.
    I see in the log file that the stage below is taking the time.
    glpibr.concurrency() 02-MAR-2013 08:13:03
    glpibr.concurrency() 02-MAR-2013 08:13:03
    glpidb() 02-MAR-2013 08:46:41
    glpidb() 02-MAR-2013 08:49:59
    glpibr() 02-MAR-2013 08:49:59
    This process itself takes more than 30 hours; it varies every day. Sometimes it takes 1 hour.
    insert into gl_balances
    ( set_of_books_id, code_combination_id, currency_code, period_name,
    actual_flag, budget_version_id, encumbrance_type_id, last_update_date,
    last_updated_by, period_type, period_year, period_num, period_net_dr, period_net_cr,
    period_to_date_adb, quarter_to_date_dr, quarter_to_date_cr, quarter_to_date_adb, year_to_date_adb,
    project_to_date_dr, project_to_date_cr, project_to_date_adb, begin_balance_dr, begin_balance_cr,
    period_net_dr_beq, period_net_cr_beq, begin_balance_dr_beq, begin_balance_cr_beq, template_id, translated_flag )
    select pi.set_of_books_id, pi.code_combination_id, pi.currency_code, pi.period_name, pi.actual_flag, pi.budget_version_id, pi.encumbrance_type_id, sysdate, :fnd_user_id, pi.period_type, pi.period_year, pi.period_num, 0, 0, NULL, 0, 0, NULL, NULL,
    0, 0, NULL, 0, 0, NULL, NULL, NULL, NULL, pi.template_id, pi.translated_flag from gl_posting_interim_50130 pi where not exists ( select 'X' from gl_balances b where b.set_of_books_id = pi.set_of_books_id and b.code_combination_id = pi.code_combination_id and b.currency_code = pi.currency_code and
    b.period_name = pi.period_name and b.actual_flag = pi.actual_flag and
    nvl(b.encumbrance_type_id, -1) = nvl(pi.encumbrance_type_id, -1) and
    nvl(b.budget_version_id, -1) = nvl(pi.budget_version_id, -1) and decode(b.translated_flag, '', '-1', 'Y', '0', 'N', '0', 'R', '1', b.translated_flag) = decode(pi.translated_flag, '', '-1', 'Y', '0', 'N', '0', 'R', '1', pi.translated_flag) )
    The query above takes more than 30 minutes, as I can see in the AWR report. As per Metalink note ID 1096873.1 this is due to the gl_posting_interim tables locking the other gl_interim_posting tables. I don't see any lock, but I can see lots of gl_posting_interim tables in the database. To my knowledge these tables should be dropped after posting completes.
    Env details -
    Apps version - 11.5.10.2
    DB version - 11.2.0.1
    OS - IBM AIX 6.1
    Please suggest.
    Thanks

    Please see these docs.
    R11i GLTTRN Translation Performance Issue In INSERT INTO GL_BALANCES [ID 761898.1]
    Information Center: Optimizing Performance for Oracle General Ledger [ID 1489537.2]
    Performance Issue With Translation Program After Upgrading To 10G Database [ID 742025.1]
    Deleting Summary Accounts Has Poor Performance [ID 1088585.1]
    GL Posting Performance Issue at glpip2() [ID 280641.1]
    GLPPOS Performance Issue After 9i To 10g Upgrade in gluddl.lpc [ID 1262020.1]
    Thanks,
    Hussein

  • How to know which sql query is taking time for concurrent program

    Hi sir,
    I am running a concurrent program that is taking a long time to execute. I want to know which SQL query is causing the performance problem.
    Thanks,
    Sreekanth

    Hi,
    My Learning: Diagnosing Oracle Applications Concurrent Programmes - 11i/R12
    How to run a Trace for a Concurrent Program? (Doc ID 415640.1)
    FAQ: Common Tracing Techniques in Oracle E-Business Applications 11i and R12 (Doc ID 296559.1)
    How To Get Level 12 Trace And FND Debug File For Concurrent Programs (Doc ID 726039.1)
    How To Trace a Concurrent Request And Generate TKPROF File (Doc ID 453527.1)
    Regards
    Yoonas

  • Select query taking time

    The following query is taking a long time. Is there any better way to write this query?
    SELECT PROGRAM_NAME_ID, PROGRAM_NAME, sum(balance) "Unpaid Balance"
        FROM (
    SELECT DISTINCT
    PROGRAM_NAME_ID ,PROGRAM_NAME,
    t.billing_key billing_key,
    (TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
    nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
    -PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
    Report_period,company_id
    FROM  BILLING B,
    PROG_SURCH T ,
    mv_program_dict P
    WHERE
    B.BILLING_KEY=T.BILLING_KEY
    AND  p.program_key= t.program_key(+)
    and company_id=:p3_hide_comp
    and b.SUBMIT_STATUS='S'
    union
    SELECT DISTINCT
    PROGRAM_NAME_ID ,PROGRAM_NAME,
    t.billing_key billing_key,
    (TUFF_GENERIC_PKG.GET_TOTAL(t.billing_key,t.program_key)+
    nvl(PENALTY_INTEREST(t.billing_key,t.program_key,b.company_id,b.report_period ),0))
    -PAYMENT_AMOUNT(B.COMPANY_ID,T.PROGRAM_KEY,B.REPORT_PERIOD) Balance,
    Report_period,company_id
    FROM  MV_BILLING B,
    MV_PROG_SURCH T ,
    mv_program_dict P
    WHERE
    B.BILLING_KEY=T.BILLING_KEY
    AND  p.program_key= t.program_key(+)
    and company_id=:p3_hide_comp
    order by report_period,program_name_id )
    where balance>=0
    GROUP BY PROGRAM_NAME_ID,PROGRAM_NAME
    ORDER BY PROGRAM_NAME_ID

    Hi,
    This is totally right.
    >
    Being one such call. The price for calling pl/sql functions in SQL can be quite high. I'd highly recommend you find a way to incorporate the pl/sql code into the SQL query.
    >
    But try this query; I hope it will help you and return the rows you want.
    SELECT   program_name_id, program_name,
               SUM (  tuff_generic_pkg.get_total (billing_key, program_key)
                    + NVL (penalty_interest (billing_key,
                                             program_key,
                                             company_id,
                                             report_period),
                           0)
                    - payment_amount (company_id, program_key, report_period)) balance
        FROM (SELECT program_name_id, program_name, t.billing_key, t.program_key,
                     b.company_id, b.report_period
                FROM billing b, prog_surch t, mv_program_dict p
               WHERE b.billing_key = t.billing_key
                 AND p.program_key = t.program_key(+)
                 AND company_id = :p3_hide_comp
                 AND b.submit_status = 'S'
              UNION
              SELECT program_name_id, program_name, t.billing_key, t.program_key,
                     b.company_id, b.report_period
                FROM mv_billing b, mv_prog_surch t, mv_program_dict p
               WHERE b.billing_key = t.billing_key
                 AND p.program_key = t.program_key(+)
                 AND company_id = :p3_hide_comp) sub
       WHERE   (  tuff_generic_pkg.get_total (billing_key, program_key)
                 + NVL (penalty_interest (billing_key,
                                          program_key,
                                          company_id,
                                          report_period),
                        0)
                 - payment_amount (company_id, program_key, report_period)) >= 0
    GROUP BY program_name_id, program_name
    Obviously I cannot test this.
    HTH -- johnxjean --

  • Insert statement taking time on oracle 10g

    Hi,
    My procedure is taking a long time in the following statement after upgrading the database from Oracle 9i to Oracle 10g.
    I am using Oracle version 10.2.0.4.0.
    cust_item is a materialized view and it is refreshed in the procedure.
    The index is dropped before inserting data into the cust_item_tbl table, and the index is recreated after the insert.
    There are almost 6 lakh (600,000) records in the MV which are to be inserted into the table.
    In 9i the insert statement below takes 1 hour, while in 10g it takes 2 hours 30 minutes.
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL QUERY';
    EXECUTE IMMEDIATE 'ALTER SESSION ENABLE PARALLEL DML';
    INSERT INTO /*+ APPEND PARALLEL */ cust_item_tbl  NOLOGGING
             (SELECT /*+ PARALLEL */
                     ctry_code, co_code, srce_loc_nbr, srce_loc_type_code,
                     cust_nbr, item_nbr, lu_eff_dt,
                     0, 0, 0, lu_end_dt,
                     bus_seg_code, 0, rt_nbr, 0, '', 0, '', SYSDATE, '', SYSDATE,
                     '', 0, ' ',
                                   case
                                 when cust_nbr in (select distinct cust_nbr from aml.log_t where CTRY_CODE = p_country_code and co_code = p_company_code)
                                 THEN
                                         case
                                            when trunc(sysdate) NOT BETWEEN trunc(lu_eff_dt) AND trunc(lu_end_dt)
                                            then NVL((select cases_per_pallet from cust_item c where c.ctry_code = a.ctry_code and c.co_code = a.co_code
                                                          and c.cust_nbr = a.cust_nbr and c.GTIN_CO_PREFX = a.GTIN_CO_PREFX and c.GTIN_ITEM_REF_NBR = a.GTIN_ITEM_REF_NBR
                                                          and c.GTIN_CK_DIGIT = a.GTIN_CK_DIGIT and trunc(sysdate) BETWEEN trunc(c.lu_eff_dt) AND trunc(c.lu_end_dt) and rownum = 1),
                                                          a.cases_per_pallet)
                                      else cases_per_pallet
                                  end
                          else cases_per_pallet
                     END cases_per_pallet,
                     cases_per_layer
                FROM cust_item a
               WHERE a.ctry_code = p_country_code ----varible passing by procedure
                 AND a.co_code = p_company_code   ----varible passing by procedure
                 AND a.ROWID =
                        (SELECT MAX (b.ROWID)
                           FROM cust_item b
                          WHERE b.ctry_code = a.ctry_code
                            AND b.co_code = a.co_code
                            AND b.ctry_code = p_country_code ----varible passing by procedure
                            AND b.co_code = p_company_code   ----varible passing by procedure
                            AND b.srce_loc_nbr = a.srce_loc_nbr
                            AND b.srce_loc_type_code = a.srce_loc_type_code
                            AND b.cust_nbr = a.cust_nbr
                            AND b.item_nbr = a.item_nbr
                             AND b.lu_eff_dt = a.lu_eff_dt));
    Explain plan from Oracle 10g:
    Plan
    INSERT STATEMENT  CHOOSECost: 133,310  Bytes: 248  Cardinality: 1                      
         5 FILTER                 
              4 HASH GROUP BY  Cost: 133,310  Bytes: 248  Cardinality: 1            
                   3 HASH JOIN  Cost: 132,424  Bytes: 1,273,090,640  Cardinality: 5,133,430       
                        1 INDEX FAST FULL SCAN INDEX MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV Cost: 10,026  Bytes: 554,410,440  Cardinality: 5,133,430 
                         2 MAT_VIEW ACCESS FULL MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV Cost: 24,570  Bytes: 718,680,200  Cardinality: 5,133,430
    Can you please look into the issue?
    Thanks.

    According to the execution plan you posted, parallelism is not taking place - no parallel operations are listed.
    Check the hint syntax. In particular, "PARALLEL" does not look right.
    Running queries in parallel can help performance, hurt it, or do nothing for it. In your case a parallel index scan on MFIPROCESS.INDX_TEMP_CUST_AUTH_PERF_MV using the PARALLEL_INDEX hint, together with the PARALLEL hint naming the table for MAT_VIEW MFIPROCESS.TEMP_CUST_AUTH_PERF_MV, might help; something like (untested):
    select /*+ PARALLEL_INDEX(INDX_TEMP_CUST_AUTH_PERF_MV) PARALLEL(TEMP_CUST_AUTH_PERF_MV) */
    Is query rewrite causing the MVs to be read? If so, hinting the query will be tricky.

  • Shared Services is taking forever to come up

    Hello Experts,
    One of my customers has an issue with Shared Services. They have integrated Shared Services with Active Directory and want to pull users into Shared Services from a specific AD group. Right now it is pulling all the users, and Shared Services is taking forever to come up.
    Environment:
    Essbase version 9.3.1
    OS : Win 2003 server SP1
    Any input on this issue is appreciated. Thanks in advance.
    Regards,
    Sonu

    Hi,
    Is the MSAD large? If so, are you filtering down to a more specific area instead of looking at the whole AD?
    Also, have you tried applying patch 9.3.1.0.07, which addresses:
    Shared Services made several calls to the user directory server (LDAP or MSAD) to build group and user cache, causing slow startup performance (Bug# 7144686). This fix significantly reduces the number of calls made to the user directory server to improve start up performance.
    You can download it from Metalink 3.
    Cheers
    John
    http://john-goodwin.blogspot.com/
