Oracle Text Query taking too long

When we run a query:
select docid from Tbl1 where contains(doc,'queryterm',1)>0;
on 2 million docs it runs in <2 seconds
When we run an insert into another table based on a search:
insert into Tbl2 (col1,col2) select 10,col2 from Tbl1 where rownum<2000; (10 in the select statement is a constant)
it runs in <2 seconds
Here's the kicker:
insert into Tbl2 (col1,col2) select col1,col2 from Tbl1 where contains(doc,'queryterm',1)>0;
it runs in 60 seconds and produces ~2k rows
Is there any hint that we can use to fix this?
TIA!

We've looked hard at the performance notes for Oracle Text, the application guide and the FAQ on it.
We've dropped the index on the table being inserted into, turned off logging and used the PARALLEL hint on the insert. There is still a bit of a disconnect between insert speed, select speed and the two together. The index was built using the parallel option, so the queries should run in parallel if I understand the performance hints correctly.
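Two hedged things worth trying; which (if either) helps depends on where the 60 seconds actually goes (fetching the Text hits or doing the insert), so treat this as a sketch rather than a known fix. The temporary table name below is made up:

insert /*+ APPEND */ into Tbl2 (col1,col2)   -- direct-path insert avoids conventional-path overhead
select col1,col2 from Tbl1 where contains(doc,'queryterm',1)>0;
commit;

-- Or split the work: collect the matching rowids first, then insert by rowid.
create global temporary table tbl1_hits (rid rowid) on commit preserve rows;

insert into tbl1_hits
select rowid from Tbl1 where contains(doc,'queryterm',1)>0;

insert into Tbl2 (col1,col2)
select t.col1, t.col2
from Tbl1 t, tbl1_hits h
where t.rowid = h.rid;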

Similar Messages

  • Oracle - Query taking too long (Materialized view)

    Hi,
    I am extracting billing information and storing it in 3 different tables... in order to show total billing (80 to 90 columns, 1 million rows per month), I've used a materialized view... I do not have indexes on the 3 billing tables - I do have 3 indexes on the materialized view...
    at the moment it's taking too long to query the data (running a query via Toad fails and shows an "Out of Memory" error message; running a query via APEX does return results but takes way too long)...
    Please advise how to make the query efficient...

    tparvaiz,
    Is it possible when building your materialized view to summarize and consolidate the data?
    Out of a million rows, what would your typical user do with that amount of data if they could retrieve it readily? The answer to this question may indicate whether and how to summarize the data within the materialized view.
    Jeff
    Edited by: jwellsnh on Mar 25, 2010 7:02 AM
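    A minimal sketch of that summarization idea, with hypothetical table and column names (billing_detail, billing_month, customer_id, charge_amount); adapt it to the real schema:
    CREATE MATERIALIZED VIEW mv_billing_summary
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT billing_month,
           customer_id,
           SUM(charge_amount) AS total_charges,
           COUNT(*)           AS line_items
    FROM   billing_detail
    GROUP  BY billing_month, customer_id;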

  • Query taking too long, yet cost is low

    hi guys,
    I am running a query on two databases that were created the same way and have the same data.
    On one the cost is nearly a million, and it runs in a matter of a few seconds
    On the other, the cost is 40000, and it doesn't finish executing
    I have looked at the explain plan, and there is no cartesian merge join on the second query, yet it is taking so long. What can I do to investigate this?
    thanks

    Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the \[code\] and \[/code\] tags to enhance the readability of the output provided:
    In SQL*Plus:
    SET LINESIZE 130
    EXPLAIN PLAN FOR <your statement>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that the package DBMS_XPLAN.DISPLAY is only available from 9i on.
    In previous versions you could run the following in SQL*Plus (on the server) instead:
    @?/rdbms/admin/utlxpls
    A different approach in SQL*Plus:
    SET AUTOTRACE ON EXPLAIN
    <run your statement>;
    will also show the execution plan.
    In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
    and post the "tkprof" output here, too.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Query taking too long on Oracle9i

    Hi All
    I am running a query on our prod database (Oracle8i 8.1.7.4) and running the same query again on the test db (Oracle9i version 4). The query is taking too long (more than 10 min) in the test db. Both databases are installed on the same machine (IBM AIX V4), and the table schema and data are the same.
    Any help would be appreciated.
    Here are the results.
    FASTER ONE
    ORACLE 8i using Production
    Statistics
    864 recursive calls
    68 db block gets
    159855 consistent gets
    20297 physical reads
    0 redo size
    1310148 bytes sent via SQL*Net to client
    68552 bytes received via SQL*Net from client
    1036 SQL*Net roundtrips to/from client
    28 sorts (memory)
    1 sorts (disk)
    15525 rows processed
    SLOWER ONE
    ORACLE 9i using Test
    Statistics
    819 recursive calls
    80 db block gets
    22981568 consistent gets
    1361 physical reads
    0 redo size
    1194902 bytes sent via SQL*Net to client
    34193 bytes received via SQL*Net from client
    945 SQL*Net roundtrips to/from client
    0 sorts (memory)
    1 sorts (disk)
    14157 rows processed

    319404-
    To help us better understand the problem,
    1) Could you post your execution plan on the two different databases?
    2) Could you list indexes (if any, on these tables)?
    3) Are any of the objects in the 'from list' a view?
    If so, are you using a user defined function to create the view?
    4) Why are you using the table 'cal_instance_relationship' twice in the 'from' clause?
    5) Can't your query be the following?
    SELECT f.person_id, f.course_cd, cv.responsible_org_unit_cd cowner, f.fee_cal_type Sem, f.fee_ci_sequence_number seq_no,
    sua.unit_cd, uv.owner_org_unit_cd uowner, uv.supervised_contact_hours hours, 0 chg_rate, sum(f.transaction_amount) tot_fee,
    ' ' tally
    FROM unit_version uv,
    cal_instance_relationship cir1,
    chg_method_apportion cma,
    student_unit_attempt sua,
    course_version cv,
    fee_ass f
    WHERE f.fee_type = 'VET-MATFEE'
    AND f.logical_delete_dt IS NULL
    AND f.s_transaction_type IN ('ASSESSMENT', 'MANUAL ADJ')
    AND f.fee_ci_sequence_number > 400
    AND f.course_cd = cv.course_cd
    AND cv.version_number = (SELECT MAX(v.version_number) FROM course_version v
    WHERE v.course_cd = cv.course_cd)
    AND f.person_id = sua.person_id
    and f.course_cd = sua.course_cd
    AND f.fee_type = cma.fee_type
    AND f.fee_ci_sequence_number = cma.fee_ci_sequence_number
    AND cma.load_ci_sequence_number = cir1.sub_ci_sequence_number
    AND cir1.sup_cal_type = 'ACAD-YR'
    AND cir1.sub_cal_type = sua.cal_type
    AND cir1.sub_ci_sequence_number = sua.ci_sequence_number
    AND sua.unit_attempt_status NOT IN ('DUPLICATE','DISCONTIN')
    AND sua.unit_cd = uv.unit_cd
    AND sua.version_number = uv.version_number
    GROUP BY f.person_id, f.course_cd, cv.responsible_org_unit_cd , f.fee_cal_type, f.fee_ci_sequence_number,
    sua.unit_cd, uv.owner_org_unit_cd, uv.supervised_contact_hours;

  • Query taking too long

    I am running a fairly complex query with several table joins
    and it is taking too long. What can I do to improve performance?
    Thanks.
    Frank

    Dan's first suggestion is key - if you are doing multiple
    table joins, you want to make sure your indexes are set up on your
    tables correctly. If you have access to the database, this should
    be your first step. Rationalize's stored procedure suggestion is
    also a great idea (again, if you have access to create and manage
    stored procedures on your DB).
    Other than that, most databases usually have some sort of SQL
    efficiency analysis tool. SQL server has one built into their Query
    Analyzer tool. I would recommend using something like that to
    streamline your SQL. Like Dan said, something as simple as the
    order of elements in your where clause might make a big
    difference.
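    As a purely hypothetical illustration of the indexing point above (invented table and column names): if the query joins orders to customers on customer_id and also filters on order_date, a composite index covering both columns is usually the first thing to check.
    CREATE INDEX orders_cust_date_ix ON orders (customer_id, order_date);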

  • Query taking too long a time

    Hi friends,
    the view I have created on a table is taking far too long to give me results
    when I try to select * from it
    Can some one suggest me a solution pls
    CREATE OR REPLACE VIEW XXKDD_LATEST_SAL
    (RN, MONTH, YEAR, EMPLOYEE_NUMBER, POSITION,
    PAYROLL_NAME, DEPT, STATUS, TERMINATION_DATE, FULL_NAME,
    TOP3, BASIC_SALARY)
    AS
    select "RN","MONTH","YEAR","EMPLOYEE_NUMBER","POSITION","PAYROLL_NAME","DEPT","STATUS","TERMINATION_DATE","FULL_NAME","TOP3","BASIC_SALARY" from (
    SELECT ROW_NUMBER() OVER (PARTITION BY employee_number ORDER BY employee_number) rn, tp.*
    FROM (SELECT MONTH, YEAR, employee_number, position, payroll_name, dept, status, termination_date, full_name,
    ROW_NUMBER () OVER (PARTITION BY employee_number, basic_salary ORDER BY YEAR , MONTH) top3,
    DECODE (basic_salary,
    100000, 4500,
    24000, 1921,
    basic_salary
    ) basic_salary
    FROM kdd_pay_hr_sal_vw
    order by employee_number,year desc) tp
    WHERE top3 <= 1);
    select * from XXKDD_LATEST_SAL;

    Read these informative threads:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
    And edit your post, add relevant details like database version, execution plan etc., it is all listed in the above links.
    And use the {noformat}{noformat} tags to keep your examples formatted, indented and readable.
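    If the intent of the view is simply each employee's most recent salary row, the usual "latest row per key" pattern may be both simpler and cheaper. This is only a sketch based on the columns visible in the post (kdd_pay_hr_sal_vw, employee_number, year, month), not a drop-in replacement:
    SELECT *
    FROM  (SELECT t.*,
                  ROW_NUMBER() OVER (PARTITION BY employee_number
                                     ORDER BY year DESC, month DESC) rn
           FROM   kdd_pay_hr_sal_vw t)
    WHERE  rn = 1;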

  • Query taking too long to finish

    Hi,
    I'm running a query which is
    Delete from msg where ID IN (select ID from deletedtrans );
    It's taking too long to complete: it had been running for 24 hours without finishing, so I cancelled it. I don't understand why it's taking so long; does anyone have any idea? I feel this query should not take this long to complete.

    That is too little information to comment on.
    1. How many records are there in "deletedtrans" table ?
    2. How many records from "msg" table are expected to be deleted ?
    3. Are statistics up-to-date on "msg" and "deletedtrans" tables ?
    4. Is "ID" column defined as NULL or NOT NULL in both "msg" and "deletedtrans" tables ? (Not sure whether this will cause any problem, but...)
    5. Is this statement being executed when other users/applications are accessing/updating "msg" table ?
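    If the statistics turn out to be stale, or the IN formulation looks like the culprit, one hedged sketch (not a guaranteed fix) is to refresh the stats and try an EXISTS variant:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'MSG');
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'DELETEDTRANS');
    END;
    /
    DELETE FROM msg m
    WHERE  EXISTS (SELECT 1 FROM deletedtrans d WHERE d.id = m.id);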

  • Loop with WMI Query taking too long, need to break out if time exceeds 5 min

    I've written a script that will loop through a list of computers and run a WMI query using the Win32_Product class. I am pinging the host first to ensure it's online, which eliminates wasted time, but the issue I'm facing is that some of the machines
    are online but the WMI query takes too long and holds up the script. I wanted to add a timeout to the WMI query so that if a particular host does not respond to the query or gets stuck, the loop will break out and go to the next computer object. I've added my code
    below:
    $Computers = @()
    $Computers += "BES10-BH"
    $Computers += "AUTSUP-VSUS"
    $Computers += "AppClus06-BH"
    $Computers += "Aut01-BH"
    $Computers += "AutLH-VSUS"
    $Computers += "AW-MGMT01-VSUS"
    $Computers += "BAMBOOAGT-VSUS"
    ## Loop through all computer objects found in the $Computers array
    $JavaInfo = @()
    FOREACH ($Client in $Computers)
    {
        ## Gather WMI installed software info from each client queried
        Clear-Host
        Write-Host "Querying: $Client" -ForegroundColor Yellow
        $HostCount++
        ## Ping the current host first (the original pinged the fixed host ADRAP-VSUS)
        $Online = (Test-Connection -ComputerName $Client -Count 1 -Quiet)
        IF ($Online)
        {
            $ColItem = Get-WmiObject -Class Win32_Product -ComputerName $Client -ErrorAction SilentlyContinue |
                Where-Object { ($_.Name -match "Java") -and (!($_.Name -match "Auto|Visual")) } |
                Select-Object Name, Version
            FOREACH ($Item in $ColItem)
            {
                ## Store the host name as a variable
                $HostNm = ($Client).ToUpper()
                ## Record the Java product name; if Java is not installed, flag it
                $JavaVerName = $Item.Name
                IF ([string]::IsNullOrEmpty($JavaVerName)) { $JavaVerName = "Not Installed" }
                ## Record the Java version; if Java is not installed, flag it
                $JavaVer = $Item.Version
                IF ([string]::IsNullOrEmpty($JavaVer)) { $JavaVer = "Not Installed" }
                ## Create a new object to organize host, Java name and version
                $JavaProp = New-Object -TypeName PSObject -Property @{
                    "HostName"    = $HostNm
                    "JavaVerName" = $JavaVerName
                    "JavaVer"     = $JavaVer
                }
                ## Add the new "JavaProp" object to the "JavaInfo" array
                $JavaInfo += $JavaProp
            }
        }
        ELSE
        {
            Write-Host "$Client didn't respond, skipping..." -ForegroundColor Red
        }
    }

    Let me give you a bigger picture of the script. I've included the emailed table the script produces and the actual script. While running the script, certain hosts get hung up when running the WMI query, which causes the script to never complete. From one of
    the posts I was able to use the Get-WmiCustom function to add a timeout of 15 seconds, after which the script will continue if it is stuck. The problem is that when a host is skipped I am not aware of it, because my script does not report the server that timed out.
    If you look at ZLBH02-VSUS highlighted in the report, you can see that it's reporting "not installed" when it should say something to the effect of "query hung".
    How can I add a variable in the function, available outside the function, that I can key off of to differentiate between a host that does not have the software installed and one that failed to query?
    Script Output:
    Script:
    ## Name: JavaReportWMI.ps1 ##
    ## Requires: Power Shell 2.0 ##
    ## Created: January 06, 2015 ##
    <##> $Version = "Script Version: 1.0" <##>
    <##> $LastUpdate = "Updated: January 06, 2015" <##>
    ## Configure Compliant Java Versions Below ##
    <##> $java6 = "6.0.430" <##>
    <##> $javaSEDEVKit6 = "1.6.0.430" <##>
    <##> $java7 = "7.0.710" <##>
    <##> $javaSEDEVKit7 = "1.7.0.710" <##>
    <##> $java8 = "8.0.250" <##>
    <##> $javaSEDDEVKit8 = "1.8.0.250" <##>
    ## Import Active Directory Module
    Import-Module ActiveDirectory
    $Timeout = "False"
    Function Get-WmiCustom([string]$computername,[string]$namespace,[string]$class,[int]$timeout=15)
    {
        $ConnectionOptions = New-Object System.Management.ConnectionOptions
        $EnumerationOptions = New-Object System.Management.EnumerationOptions
        ## Apply the per-query timeout to the enumeration options
        $timeoutseconds = New-TimeSpan -Seconds $timeout
        $EnumerationOptions.set_timeout($timeoutseconds)
        $assembledpath = "\\" + $computername + "\" + $namespace
        #Write-Host $assembledpath -ForegroundColor Yellow
        $Scope = New-Object System.Management.ManagementScope $assembledpath, $ConnectionOptions
        $Scope.Connect()
        $querystring = "SELECT * FROM " + $class
        #Write-Host $querystring
        $query = New-Object System.Management.ObjectQuery $querystring
        $searcher = New-Object System.Management.ManagementObjectSearcher
        $searcher.set_options($EnumerationOptions)
        $searcher.Query = $querystring
        $searcher.Scope = $Scope
        trap { $_ } $result = $searcher.Get()
        return $result
    }
    ## Log time for duration clock
    $Start = Get-Date
    $StartTime = "StartTime: " + $Start.ToShortTimeString()
    ## Environmental Variables
    $QueryMode = $Args #parameter for either "Desktops" / "Servers"
    $CsvPath = "C:\Scripts\JavaReport\JavaReport" + "$QueryMode" + ".csv"
    $Date = Get-Date
    $Domain = $env:UserDomain
    $HostName = ($env:ComputerName).ToLower()
    ## Regional Settings
    ## Used for testing
    IF ($Domain -eq "abc") {$Region = "US"; $SMTPDomain = "abc.com"; `
    $ToAddress = "[email protected]"; `
    $ReplyDomain = "abc.com"; $smtpServer = "relay.abc.com"}
    ## Control Variables
    $FromAddress = "JavaReport@$Hostname.na.$SMTPDomain"
    $EmailSubject = "Java Report - $Region"
    $computers = @()
    $computers += "ZLBH02-VSUS"
    $computers += "AUTSUP-VSUS"
    $computers += "AppClus06-BH"
    $computers += "Aut01-BH"
    $computers += "AutLH-VSUS"
    $computers += "AW-MGMT01-VSUS"
    $computers += "BAMBOOAGT-VSUS"
    #>
    ## Loop through all computer objects found in $Computes Array
    $JavaInfo = @()
    FOREACH($Client in $Computers)
    ## Gather WMI installed Software info from each client queried
    Clear-Host
    Write-Host "Querying: $Client" -foregroundcolor "yellow"
    $HostCount++
    $Online = (test-connection -ComputerName ADRAP-VSUS -Count 1 -Quiet)
    IF($Online -eq "True")
    $ColItem = Get-WmiCustom -Class Win32_Product -Namespace "root\cimv2" -ComputerName $Client -ErrorAction SilentlyContinue | `
    Where {(($_.name -match "Java") -and (!($_.name -match "Auto|Visual")))} | `
    Select-Object Name,Version
    FOREACH($Item in $ColItem)
    ## Write Host Name as variable
    $HostNm = ($Client).ToUpper()
    ## Query Named Version of Java, if Java is not installed fill variable as "No Java Installed
    $JavaVerName = $Item.name
    IF([string]::IsNullOrEmpty($JavaVerName))
    {$JavaVerName = "No Installed"}
    ## Query Version of Java, if Java is not installed fill variable as "No Java Installed
    $JavaVer = $Item.Version
    IF([string]::IsNullOrEmpty($JavaVer))
    {$JavaVer = "Not Installed"}
    ## Create new object to organize Host,JavaName & Version
    $JavaProp = New-Object -TypeName PSObject -Property @{
    "HostName" = $HostNm
    "JavaVerName" = $JavaVerName
    "JavaVer" = $JavaVer
    ## Add new object data "JavaProp" from loop into array "JavaInfo"
    $JavaInfo += $JavaProp
    Else
    {Write-Host "$Client didn't respond, Skipping..." -foregroundcolor "Red"}
    #Write-Host "Host Query Count: $LoopCount" -foregroundcolor "yellow"
    ## Sort Array
    Write-Host "Starting Array" -foregroundcolor "yellow"
    $JavaInfoSorted = $JavaInfo | Sort-object HostName
    Write-Host "Starting Export CSV" -foregroundcolor "yellow"
    ## Export CSV file
    $JavaInfoSorted | export-csv -NoType $CsvPath -Force
    $Att = new-object Net.Mail.Attachment($CsvPath)
    Write-Host "Building Table Header" -foregroundcolor "yellow"
    ## Table Header
    $list = "<table border=1><font size=1.5 face=verdana color=black>"
    $list += "<tr><th><b>Host Name</b></th><th><b>Java Ver Name</b></th><th><b>Ver Number</b></th></tr>"
    Write-Host "Building HTML Table" -foregroundcolor "yellow"
    FOREACH($Item in $JavaInfoSorted)
    Write-Host "$UniqueHost" -foregroundcolor "Yellow"
    ## Alternate Table Shading between Green and White
    IF($LoopCount++ % 2 -eq 0)
    {$BK = "bgcolor='E5F5D7'"}
    ELSE
    {$BK = "bgcolor='FFFFFF'"}
    ## Set Variables
    $JVer = $Item.JavaVer
    $Jname = $Item.JavaVerName
    ## Change Non-Compliant Java Versions to red in table
    IF((($jVer -like "6.0*") -and (!($jVer -match $java6))) -or `
    (($jName -like "*Java(TM) SE Development Kit 6*") -and (!($jName -match $javaSEDEVKit6))) -or `
    (($jVer -like "7.0*") -and (!($jVer -match $java7))) -or `
    (($jName -like "*Java SE Development Kit 7*") -and (!($jName -match $javaSEDEVKit7))))
    $list += "<tr $BK style='color: #ff0000'>"
    ## Compliant Java version are displayed in black
    ELSE
    $list += "<tr $BK style='color: #000000'>"
    ## Populate table with host name variable
    $list += "<td>" + $Item."HostName" + "</td>"
    ## Populate table with Java Version Name variable
    $list += "<td>" + $Item."JavaVerName" + "</td>"
    ## Populate table with Java Versionvariable
    $list += "<td>" + $Item."JavaVer" + "</td>"
    $list += "</tr>"
    $list += "</table></font>"
    $End = Get-Date
    $EndTime = "EndTime: " + $End.ToShortTimeString()
    #$TimeDiff = New-TimeSpan -Start $StartTime -End $EndTime
    $StartTime
    $EndTime
    $TimeDiff
    Write-Host "Total Hosts:$HostCount"
    ## Email Function
    Function SendEmail
    $msg = new-object Net.Mail.MailMessage
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)
    $msg.From = ($FromAddress)
    $msg.ReplyTo =($ToAddress)
    $msg.To.Add($ToAddress)
    #$msg.BCC.Add($BCCAddress)
    $msg.Attachments.Add($Att)
    $msg.Subject = ($EmailSubject)
    $msg.Body = $Body
    $msg.IsBodyHTML = $true
    $smtp.Send($msg)
    $msg.Dispose()
    ## Email Body
    $Body = $Body + @"
    <html><body><font face="verdana" size="2.5" color="black">
    <p><b>Java Report - $Region</b></p>
    <p>$list</p>
    </html></body></font>
    <html><body><font face="verdana" size="1.0" color="red">
    <p><b> Note: Items in red do not have the latest version of Java installed. Please open a ticket to have an engineer address the issue.</b></p>
    </html></body></font>
    <html><body><font face="verdana" size="2.5" color="black">
    <p>
    $StartTime<br>
    $EndTime<br>
    $TimeDiff<br>
    $HostCount<br>
    </p>
    <p>
    Run date: $Date<br>
    $Version<br>
    $LastUpdate<br>
    </p>
    </html></body></font>
    ## Send Email
    SendEmail

  • Query taking too long to execute on Oracle 9i

    Mark,
    If you remember, I was working on a large xml document with deep nested complex elements.
    I am trying to query the document and as such I am running a join query. The database is not able to run this query and I am wondering if there is any alternate way to rephrase this query.
    PS: I have run this query for almost 24 hrs without any result.
    Here is the query:
    select extract(value(X),'//eNest[@aSixtyFour=2][@aUnique1=//eNest[@aSixtyFour=2]/@aUnique1]/@aUnique1')
    from OracleBench_No_Schema X
    any feedback would be extremely helpful
    Thanks
    JN

    John
    I'm not sure that I can be of much more help at the moment. We are working to improve the performance of //queries over recursive structures. These enhancements will appear in a future release of the product.

  • Sql query taking too long to run

    I am not sure what to do. My app takes too long to run and the reason is right in front of me, but again, I don't know what to do or where to go. (VB.Net VS 2005)
    The main part of my query takes about 15 to 20 seconds to run. When I tack on this other part it slows the response to over 2 minutes.
    Even running this other part alone takes 2+ minutes. The query has two SUM functions. Is it possible, somehow, to run this query so it builds its results into another table, and then use that new table, with the results already computed, joined into the first part of the main query?
    I am using Oracle 9i with SQL Developer, which I am new to. Below is the culprit. Thanks
    Select adorder.primaryorderer_client_id,
    (sum(Case When insertion.insert_date_id >= 2648 and insertion.insert_date_id < 2683
    Then insertchargedetail.Amount_insertDetail Else 0 End)) As CurRev,
    (sum(Case When insertion.insert_date_id >= 2282 and insertion.insert_date_id < 2317
    Then insertchargedetail.Amount_insertDetail Else 0 End)) As LastRev
    from Adorder
    Inner Join insertion On Adorder.id=insertion.Adorder_id
    Inner Join insertchargesummary On insertion.id=insertchargesummary.insertion_id
    Inner Join insertchargedetail On insertchargesummary.id=insertchargedetail.insertchargesummary_id
    where ((insertion.insert_date_id >= 2282 and insertion.insert_date_id < 2317)
    Or (insertion.insert_date_id >= 2648 and insertion.insert_date_id < 2683))
    group by adorder.primaryorderer_client_id;

    How to post a tuning request:
    HOW TO: Post a SQL statement tuning request - template posting
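    The poster's own idea of pre-computing the sums is workable. A hedged sketch using a global temporary table (column names taken from the posted query, datatypes assumed):
    CREATE GLOBAL TEMPORARY TABLE rev_summary (
      primaryorderer_client_id NUMBER,
      currev                   NUMBER,
      lastrev                  NUMBER
    ) ON COMMIT PRESERVE ROWS;
    INSERT INTO rev_summary
    SELECT adorder.primaryorderer_client_id,
           SUM(CASE WHEN insertion.insert_date_id >= 2648 AND insertion.insert_date_id < 2683
                    THEN insertchargedetail.amount_insertdetail ELSE 0 END),
           SUM(CASE WHEN insertion.insert_date_id >= 2282 AND insertion.insert_date_id < 2317
                    THEN insertchargedetail.amount_insertdetail ELSE 0 END)
    FROM   adorder
           INNER JOIN insertion           ON adorder.id = insertion.adorder_id
           INNER JOIN insertchargesummary ON insertion.id = insertchargesummary.insertion_id
           INNER JOIN insertchargedetail  ON insertchargesummary.id = insertchargedetail.insertchargesummary_id
    WHERE  (insertion.insert_date_id >= 2282 AND insertion.insert_date_id < 2317)
        OR (insertion.insert_date_id >= 2648 AND insertion.insert_date_id < 2683)
    GROUP  BY adorder.primaryorderer_client_id;
    -- The main query can then join to rev_summary instead of re-computing the sums.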

  • SQL Query taking too long

    Hello there,
    Can someone please help me:
    I have a SQL query that is taking days to complete. They say that on the former database version (8) it used to run in about 1 hour, but now that we're using 10g it's taking days... any help with that, please?
    Texas

    Texas B wrote:
    | Id  | Operation                     | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                  |     1 |   119 |  3171   (2)| 00:00:39 |
    |   1 |  NESTED LOOPS                 |                  |     1 |   119 |  3171   (2)| 00:00:39 |
    |   2 |   MERGE JOIN CARTESIAN        |                  |     1 |    71 |  3097   (2)| 00:00:38 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| PS_JRNL_LN       |     1 |    41 |   388   (0)| 00:00:05 |
    |*  4 |     INDEX SKIP SCAN           | PS_JRNL_LN       |     6 |       |   388   (0)| 00:00:05 |
    |   5 |    BUFFER SORT                |                  |   116K|  3416K|  2709   (2)| 00:00:33 |
    |   6 |     TABLE ACCESS FULL         | PS_BI_HDR        |   116K|  3416K|  2709   (2)| 00:00:33 |
    |*  7 |   TABLE ACCESS BY INDEX ROWID | PS_BI_ACCT_ENTRY |     1 |    48 |    73   (0)| 00:00:01 |
    |*  8 |    INDEX RANGE SCAN           | PS_BI_ACCT_ENTRY |     1 |       |    73   (0)| 00:00:01 |
    A few comments:
    1. Please re-edit your post and add the "Predicate Information" section below the plan, so that the filter and access predicates can be seen. They're quite helpful to understand the execution plan better.
    2. You're using bind variables, therefore the EXPLAIN PLAN output is only of limited use, since EXPLAIN PLAN doesn't perform "bind variable peeking". With "bind variable peeking" the optimizer peeks at the actual values passed when determining the execution plan. If you have a histogram generated on PS_JRNL_LN.JOURNAL_ID (check DBA/ALL/USER_TAB_COLUMNS.HISTOGRAM) or the values used are "out-of-range" (less or greater than recorded min and max value of column) then you might get different execution plans depending on the actual values passed.
    3. You can get the actual execution plan(s) from the shared pool by obtaining the SQL_ID of the statement (e.g. check V$SESSION) and using the DBMS_XPLAN.DISPLAY_CURSOR function for this SQL_ID (a short sketch follows after these comments)
    4. The optimizer estimates that out of the 11 million rows of PS_JRNL_LN more or less no rows corresponds to this predicate:
    A.JOURNAL_ID = :1
    AND A.JRNL_LINE_STATUS = '1'
    Since for the unknown bind variable a hard coded default selectivity of 5% is used which corresponds to a cardinality approx. 550,000 rows, the JRNL_LINE_STATUS = '1' predicate seems to be quite selective.
    Is this estimate in the right ballpark or way off?
    Due to this estimate the optimizer uses a cartesian join which could generate a very large intermediate set if the estimate is way off, e.g. if 1,000 rows are returned instead of 0 the cartesian join will already generate 1,000 * 116,000 => 116,000,000 rows.
    This row source will then be used as driving source of a nested loop which means that many times the index and table lookup to PS_BI_ACCT_ENTRY will be performed.
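    A minimal sketch for point 3 above (10g and later); the LIKE filter is just one assumed way of locating the statement, so adapt it to your environment:
    SELECT sql_id, child_number, sql_text
    FROM   v$sql
    WHERE  sql_text LIKE '%PS_JRNL_LN%';
    SELECT *
    FROM   TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL));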
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • SQL query taking too long to process

    dear guys, I have this one problem where the SQL statement takes a very long time to be processed. It takes more than 1 minute, depending on the total data in the table. I guess this has to do with the 'count' statements. Here is the code:
    $sql = "SELECT company,theID,abbs,A as Active,N as Nonactive,(A+N) as Total
    FROM(
    select distinct D.nama As company, C.domID As theID, D.abbrew As abbs,
    count(distinct case when B.ids is NOT NULL THEN A.dauserid END) As A,
    count(distinct case when B.ids is NULL THEN A.dauserid END) As N
    FROM
    tableuser A LEFT OUTER JOIN tabletranscript B on (A.dauserid=B.dauserid)
    INNER JOIN thedommember C ON(C.entitybuktiID=1 AND C.mypriority=1 AND
    C.entitybuktiID=A.dauserid)
    INNER JOIN mydomain D ON (C.domID=".$getID.")
    GROUP BY D.nama, C.domID, D.abbrew
    ORDER BY company)";
    Hope any of you can simplify these statements into a query that doesn't take ages to be processed.

    What is your Oracle version?
    Did you gather stats?
    Stats are used by the optimizer to work out a good query plan; if your stats are stale, the plan can behave badly.
    Can you post the trace file for this specific query here (processed with tkprof, including wait events)?
    Can you avoid the DISTINCT clause in your SQL? DISTINCT always requires a sort, and sorting slows performance.
    Unless you can work through the points above, it is hard to say what the real issue with this SQL is.
    Khurram
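    For the first two of those points, a quick sketch (table names taken from the posted query; the owner is assumed to be the current schema):
    SELECT * FROM v$version;
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLEUSER',       cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'TABLETRANSCRIPT', cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'THEDOMMEMBER',    cascade => TRUE);
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'MYDOMAIN',        cascade => TRUE);
    END;
    /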

  • Query taking too long when using bind variable

    Hi All,
    There is a query in our prod DB which runs very slowly (approx 2 hours) when it uses bind variables (via the JDBC thin client); when I try passing the values using TOAD/SQL Developer it runs fine.
    Explain plan for the fast-running query:
    SELECT STATEMENT ALL_ROWSCost: 146 Bytes: 379 Cardinality: 1                                                   
         21 SORT ORDER BY Cost: 146 Bytes: 379 Cardinality: 1                                              
              20 NESTED LOOPS Cost: 145 Bytes: 379 Cardinality: 1                                         
                   17 HASH JOIN Cost: 22 Bytes: 42,558 Cardinality: 123                                    
                        15 MERGE JOIN CARTESIAN Cost: 15 Bytes: 8,910 Cardinality: 27                               
                             12 FILTER                          
                                  11 NESTED LOOPS OUTER Cost: 9 Bytes: 316 Cardinality: 1                     
                                       8 NESTED LOOPS OUTER Cost: 8 Bytes: 290 Cardinality: 1                
                                            5 NESTED LOOPS Cost: 6 Bytes: 256 Cardinality: 1           
                                                 2 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE GDP.GDP_FX_DEALS_INCREMENTOR Cost: 4 Bytes: 28 Cardinality: 1 Partition #: 9 Partition access computed by row location     
                                                      1 INDEX RANGE SCAN INDEX GDP.GDP_FX_DEALS_INC_IDX_01 Cost: 3 Cardinality: 1
                                                 4 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 2 Bytes: 228 Cardinality: 1      
                                                      3 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 1 Cardinality: 1
                                            7 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 2 Bytes: 34 Cardinality: 1           
                                                 6 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 1 Cardinality: 1      
                                       10 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_COUNTERPARTIES Cost: 1 Bytes: 26 Cardinality: 1                
                                            9 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.PK_CPTY Cost: 0 Cardinality: 1           
                             14 BUFFER SORT Cost: 14 Bytes: 448 Cardinality: 32                          
                                  13 TABLE ACCESS FULL TABLE GDP.GDP_CITIES Cost: 6 Bytes: 448 Cardinality: 32                     
                        16 TABLE ACCESS FULL TABLE GDP.GDP_AREAS Cost: 6 Bytes: 2,304 Cardinality: 144                               
                   19 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_PORTFOLIOS Cost: 1 Bytes: 33 Cardinality: 1                                    
                        18 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.PORTFOLIOS_KEY Cost: 0 Cardinality: 1                               
    Explain Plan for Slow Query
    Plan
    SELECT STATEMENT ALL_ROWSCost: 11,526,226 Bytes: 119,281,912 Cardinality: 314,728                                                   
         21 SORT ORDER BY Cost: 11,526,226 Bytes: 119,281,912 Cardinality: 314,728                                              
              20 HASH JOIN Cost: 11,510,350 Bytes: 119,281,912 Cardinality: 314,728                                         
                   2 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_PORTFOLIOS Cost: 1,741 Bytes: 177,540 Cardinality: 5,380                                    
                        1 INDEX FULL SCAN INDEX (UNIQUE) GDP.PORTFOLIOS_KEY Cost: 14 Cardinality: 5,380                               
                   19 HASH JOIN Cost: 11,507,479 Bytes: 87,932,495,360 Cardinality: 254,140,160                                    
                        3 TABLE ACCESS FULL TABLE GDP.GDP_AREAS Cost: 6 Bytes: 2,304 Cardinality: 144                               
                        18 MERGE JOIN CARTESIAN Cost: 11,506,343 Bytes: 18,602,733,930 Cardinality: 56,371,921                               
                             15 FILTER                          
                                  14 HASH JOIN RIGHT OUTER Cost: 3,930,405 Bytes: 556,672,868 Cardinality: 1,761,623                     
                                       5 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_COUNTERPARTIES Cost: 6,763 Bytes: 892,580 Cardinality: 34,330                
                                            4 INDEX FULL SCAN INDEX (UNIQUE) GDP.PK_CPTY Cost: 63 Cardinality: 34,330           
                                       13 HASH JOIN OUTER Cost: 3,923,634 Bytes: 510,870,670 Cardinality: 1,761,623                
                                            10 HASH JOIN Cost: 2,096,894 Bytes: 450,975,488 Cardinality: 1,761,623           
                                                 7 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE GDP.GDP_FX_DEALS_INCREMENTOR Cost: 2,763 Bytes: 52,083,248 Cardinality: 1,860,116 Partition #: 14 Partition access computed by row location     
                                                      6 INDEX RANGE SCAN INDEX GDP.GDP_FX_DEALS_INC_IDX_01 Cost: 480 Cardinality: 334,821
                                                 9 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 1,734,205 Bytes: 8,320,076,820 Cardinality: 36,491,565      
                                                      8 INDEX FULL SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 104,335 Cardinality: 39,200,838
                                            12 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 1,733,836 Bytes: 1,331,145,696 Cardinality: 39,151,344           
                                                 11 INDEX FULL SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 104,335 Cardinality: 39,200,838      
                             17 BUFFER SORT Cost: 11,499,580 Bytes: 448 Cardinality: 32                          
                                  16 TABLE ACCESS FULL TABLE GDP.GDP_CITIES Cost: 4 Bytes: 448 Cardinality: 32                     
    How can I avoid this?
    Thanks

    Hello
    Could you reformat your execution plans because they aren't particularly readable. The forums allow you to preserve the formatting of code or output by putting the symbol {noformat}{noformat} before and after the section of text you want to preserve formatting for. 
    If you write
    {noformat}select * from v$version
    {noformat}
    it will be displayed as: select * from v$version
    So can you run the above statement and post the output here so we know the full Oracle version you are working with? And finally, it would be really helpful to see the query you are running. When you say it runs fine in Toad, is that when you replace the bind variables with the values, or are you also using bind variables in Toad?
    Cheers
    David

  • Create table query taking too long..

    Hello experts...
    I am taking a backup of table A, which consists of 135 million records...
    for this I am using the query below..
    create table tableA_bkup as select * from tableA;
    it has been taking more than an hour.. still running....
    is there any faster way to do this....
    thanks in advance....

    CTAS is one of the fastest ways to do such a thing.
    Remember you duplicate data. This means if your table holds 50GB of data then it will have to copy those 50GB of data.
    A different way could be to use EXPDP to create a backup dump file from this table data. However I'm not sure if there is a big performance difference.
    Both versions could profit from parallel execution.
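    A hedged sketch of the parallel variant (degree 8 is an arbitrary example; NOLOGGING only helps if the tablespace permits it and you accept the recovery implications):
    CREATE TABLE tableA_bkup
    PARALLEL 8
    NOLOGGING
    AS
    SELECT /*+ PARALLEL(tableA, 8) */ *
    FROM   tableA;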

  • Query taking too long to execute

    Hi All,
    I have just moved a cube from DEV to Q and loaded the data (using INIT). I have created a query to test the data. When I execute the query, the initial result shows up quickly, but when I drill down on one characteristic, say CHAR ABC, it takes a long time (I have been waiting 25 minutes to see the result and it is still running). The same query does not take any time when run in T. What could be the problem? This query is my first query in Q.
    Any suggestions would be highly appreciated.
    Best Rgds,
    James.

    Go to RSRV and you will see an option "All combined tests".
    In that, you will see "Check data for master data". Double-click that, enter your CHAR*, and run the test.
    Next, under transaction data, run each of the tests for the ODS / cube that you run the report from.
    These tests will usually point to an inconsistency. You may also want to look at the design of the cube / ODS. In the ODS, you may have to check the key fields.
    Ravi Thothadri
