Query taking too long when using bind variable

Hi All,
There is a query in our prod DB which runs very slowly (approx. 2 hours) when it uses bind variables (via the JDBC thin client), but when I try passing the values using TOAD/SQL Developer it runs fine.
Explain Plan for the Fast Query
SELECT STATEMENT ALL_ROWSCost: 146 Bytes: 379 Cardinality: 1                                                   
     21 SORT ORDER BY Cost: 146 Bytes: 379 Cardinality: 1                                              
          20 NESTED LOOPS Cost: 145 Bytes: 379 Cardinality: 1                                         
               17 HASH JOIN Cost: 22 Bytes: 42,558 Cardinality: 123                                    
                    15 MERGE JOIN CARTESIAN Cost: 15 Bytes: 8,910 Cardinality: 27                               
                         12 FILTER                          
                              11 NESTED LOOPS OUTER Cost: 9 Bytes: 316 Cardinality: 1                     
                                   8 NESTED LOOPS OUTER Cost: 8 Bytes: 290 Cardinality: 1                
                                        5 NESTED LOOPS Cost: 6 Bytes: 256 Cardinality: 1           
                                             2 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE GDP.GDP_FX_DEALS_INCREMENTOR Cost: 4 Bytes: 28 Cardinality: 1 Partition #: 9 Partition access computed by row location     
                                                  1 INDEX RANGE SCAN INDEX GDP.GDP_FX_DEALS_INC_IDX_01 Cost: 3 Cardinality: 1
                                             4 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 2 Bytes: 228 Cardinality: 1      
                                                  3 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 1 Cardinality: 1
                                        7 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 2 Bytes: 34 Cardinality: 1           
                                             6 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 1 Cardinality: 1      
                                   10 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_COUNTERPARTIES Cost: 1 Bytes: 26 Cardinality: 1                
                                        9 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.PK_CPTY Cost: 0 Cardinality: 1           
                         14 BUFFER SORT Cost: 14 Bytes: 448 Cardinality: 32                          
                              13 TABLE ACCESS FULL TABLE GDP.GDP_CITIES Cost: 6 Bytes: 448 Cardinality: 32                     
                    16 TABLE ACCESS FULL TABLE GDP.GDP_AREAS Cost: 6 Bytes: 2,304 Cardinality: 144                               
               19 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_PORTFOLIOS Cost: 1 Bytes: 33 Cardinality: 1                                    
                    18 INDEX UNIQUE SCAN INDEX (UNIQUE) GDP.PORTFOLIOS_KEY Cost: 0 Cardinality: 1                               
Explain Plan for Slow Query
Plan
SELECT STATEMENT ALL_ROWSCost: 11,526,226 Bytes: 119,281,912 Cardinality: 314,728                                                   
     21 SORT ORDER BY Cost: 11,526,226 Bytes: 119,281,912 Cardinality: 314,728                                              
          20 HASH JOIN Cost: 11,510,350 Bytes: 119,281,912 Cardinality: 314,728                                         
               2 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_PORTFOLIOS Cost: 1,741 Bytes: 177,540 Cardinality: 5,380                                    
                    1 INDEX FULL SCAN INDEX (UNIQUE) GDP.PORTFOLIOS_KEY Cost: 14 Cardinality: 5,380                               
               19 HASH JOIN Cost: 11,507,479 Bytes: 87,932,495,360 Cardinality: 254,140,160                                    
                    3 TABLE ACCESS FULL TABLE GDP.GDP_AREAS Cost: 6 Bytes: 2,304 Cardinality: 144                               
                    18 MERGE JOIN CARTESIAN Cost: 11,506,343 Bytes: 18,602,733,930 Cardinality: 56,371,921                               
                         15 FILTER                          
                              14 HASH JOIN RIGHT OUTER Cost: 3,930,405 Bytes: 556,672,868 Cardinality: 1,761,623                     
                                   5 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_COUNTERPARTIES Cost: 6,763 Bytes: 892,580 Cardinality: 34,330                
                                        4 INDEX FULL SCAN INDEX (UNIQUE) GDP.PK_CPTY Cost: 63 Cardinality: 34,330           
                                   13 HASH JOIN OUTER Cost: 3,923,634 Bytes: 510,870,670 Cardinality: 1,761,623                
                                        10 HASH JOIN Cost: 2,096,894 Bytes: 450,975,488 Cardinality: 1,761,623           
                                             7 TABLE ACCESS BY GLOBAL INDEX ROWID TABLE GDP.GDP_FX_DEALS_INCREMENTOR Cost: 2,763 Bytes: 52,083,248 Cardinality: 1,860,116 Partition #: 14 Partition access computed by row location     
                                                  6 INDEX RANGE SCAN INDEX GDP.GDP_FX_DEALS_INC_IDX_01 Cost: 480 Cardinality: 334,821
                                             9 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 1,734,205 Bytes: 8,320,076,820 Cardinality: 36,491,565      
                                                  8 INDEX FULL SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 104,335 Cardinality: 39,200,838
                                        12 TABLE ACCESS BY INDEX ROWID TABLE GDP.GDP_FX_DEALS Cost: 1,733,836 Bytes: 1,331,145,696 Cardinality: 39,151,344           
                                             11 INDEX FULL SCAN INDEX (UNIQUE) GDP.GDP_FX_DEALS_KEY Cost: 104,335 Cardinality: 39,200,838      
                         17 BUFFER SORT Cost: 11,499,580 Bytes: 448 Cardinality: 32                          
                              16 TABLE ACCESS FULL TABLE GDP.GDP_CITIES Cost: 4 Bytes: 448 Cardinality: 32                     
How can I avoid this?
Thanks

Hello
Could you reformat your execution plans? They aren't particularly readable. The forums allow you to preserve the formatting of code or output by putting the symbol {noformat}{noformat} before and after the section of text whose formatting you want to preserve.
If you write
{noformat}select * from v$version
{noformat}
it will be displayed as:
select * from v$version
So can you run the above statement and post the output here so we know the full Oracle version you are working with? And finally, it would be really helpful to see the query you are running. When you say it runs fine in Toad, is that when you replace the bind variables with the values, or are you also using bind variables in Toad?
Cheers
David

Similar Messages

  • What's disadvantages when using bind variables always in java?

Hello everyone. Could someone tell me what the disadvantages are of using bind variables in Java? I have heard of some such cases before. Thanks in advance!

    99% of the time, you should be using bind variables. If you have columns which are highly skewed, however, you may want to consider using literals (assuming CURSOR_SHARING=EXACT), since that may allow the CBO to make a better decision.
    If you have an orders table, for example, you may have a status column that specifies whether the order is complete, in transit, or new. If you've been running for a while, 99% of your orders will be complete, so
    SELECT COUNT(*)
      FROM orders
WHERE status = :1
should do a full table scan if you specify 'COMPLETE'. If you passed in 'IN TRANSIT', though, an index scan might be more appropriate. If you want to pass in different values and get different query plans, you need to use literals. 99% of the time, though, you want the same plan, so you want to use bind variables.
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
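    A minimal sketch of the skew scenario Justin describes (the ORDERS table, STATUS column and :1 bind come from his example; the DDL, index and histogram call are illustrative assumptions):
    -- Mostly 'COMPLETE' rows, a few 'IN TRANSIT'/'NEW' rows, and an index on STATUS.
    CREATE TABLE orders (
      order_id NUMBER PRIMARY KEY,
      status   VARCHAR2(12) NOT NULL
    );
    CREATE INDEX orders_status_idx ON orders (status);
    -- A histogram on STATUS lets the optimizer see the skew:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(
        ownname    => USER,
        tabname    => 'ORDERS',
        method_opt => 'FOR COLUMNS status SIZE 254');
    END;
    /
    -- With literals the optimizer can choose a different plan per value:
    SELECT COUNT(*) FROM orders WHERE status = 'COMPLETE';    -- likely full scan
    SELECT COUNT(*) FROM orders WHERE status = 'IN TRANSIT';  -- likely index range scan
    -- With a bind variable, one shared plan is reused for every value:
    SELECT COUNT(*) FROM orders WHERE status = :1;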

  • Query don't use the right index when using bind variables

    Hi people !
    I need some help because I have an issue with a query that doesn't use the right indexes as it should.
    First of all, I have mainly three tables:
    ORDER: table containing the description of each order (approximately 1,000,000 records)
    ORDER_MVTS: table containing the tasks performed (called movements) to set up each order,
    with the quantity of packages prepared for each product (approximately 10,000,000 records)
    PRODUCT: table containing the products (approximately 50,000 records)
    When I launch the query with hard-coded values, it brings back the response very fast
    because it uses the right index (ORDER_DHR_VALID), which represents the date and hour of the order
    (with format 'DD/MM/YYYY HH24:MI:SS'). The selectivity of this index is good.
    NB 1: I have to use the trick " >= trunc(date) and < trunc(date) + 1 " to filter on a simple date because
    the index contains hours and minutes (I know it probably wasn't a bright idea at conception time).
    NB 2: The index on ORDER_MVTS.PRODUCT_CODE isn't discriminating enough because there aren't enough different products.
    It's the same for the indexes on CUSTOMER_CODE and MVT_TYPE, so only the index on ORDER.DHR_VALID is good.
    Here is the correct explain plan when I execute the query with hard coded values :
    SELECT SUM(ORDER_MVTS.NB_PACKAGE)
    FROM ORDER_MVTS, PRODUCT, ORDER
    WHERE ORDER.DHR_VALID >= TRUNC(to_date('14/11/2008 10:04:56','DD/MM/YYYY HH24:MI:SS'))
    AND ORDER.DHR_VALID < TRUNC(to_date('14/11/2008 10:04:56','DD/MM/YYYY HH24:MI:SS')) + 1
    AND ORDER_MVTS.MVT_TYPE = 'DELIVERY'
    AND PRODUCT.CODE = ORDER_MVTS.PRODUCT_CODE
    AND ORDER_MVTS.ORDER_CODE = ORDER.CODE
    AND ORDER.CUSTOMER_CODE = 'ADIDAS'
    AND PRODUCT.CODE = 1234
    Rows Row Source Operation
    1 SORT AGGREGATE
    2 NESTED LOOPS
    4 NESTED LOOPS
    2 INDEX UNIQUE SCAN (object id 378548) --> PRODUCT_PK
    4 TABLE ACCESS BY INDEX ROWID ORDER
    777 INDEX RANGE SCAN (object id 378119) --> ORDER_DHR_VALID
    2 TABLE ACCESS BY INDEX ROWID ORDER_MVTS
    30 INDEX RANGE SCAN (object id 377784) --> ORDER_MVTS_ORDER_FK
    Now the problem is when the query is used in a cursor with bind variables.
    It seems like Oracle doesn't use the index on ORDER.DHR_VALID because it can't figure out that it only has
    to filter on a short period of time (a single day).
    So Oracle uses the index on ORDER_MVTS.PRODUCT_CODE, which isn't a bright idea (it takes 10 seconds instead of just one).
    Here is the bad explain plan :
    Rows Row Source Operation
    1 SORT AGGREGATE
    2 NESTED LOOPS
    722 NESTED LOOPS
    2 INDEX UNIQUE SCAN (object id 378548) --> PRODUCT_PK
    722 TABLE ACCESS BY INDEX ROWID ORDER_MVTS
    1790 INDEX RANGE SCAN (object id 377777) --> ORDER_MVTS_PRODUCT_FK
    2 TABLE ACCESS BY INDEX ROWID ORDER
    1442 INDEX UNIQUE SCAN (object id 378439) --> ORDER_PK
    Now I have found two solutions to this problem :
    1) using a Hint to force the use of index on ORDER.DHR_VALID (with /*+ INDEX(ORDER ORDER_DHR_VALID) */ )
    2) Using Dynamic SQL and keeping the date hard coded (but not the other values except mvt_type)
    For example :
    QUERY :=
    'SELECT SUM(ORDER_MVTS.NB_PACKAGE)
       FROM ORDER_MVTS, PRODUCT, ORDER
      WHERE ORDER.DHR_VALID >= TRUNC(TO_DATE('''||To_char(P_DTE_VAL,'DD/MM/YYYY')||''',''DD/MM/YYYY''))
        AND ORDER.DHR_VALID <  TRUNC(TO_DATE('''||To_char(P_DTE_VAL,'DD/MM/YYYY')||''',''DD/MM/YYYY'')) + 1
        AND ORDER_MVTS.MVT_TYPE = ''DELIVERY''
        AND PRODUCT.CODE = ORDER_MVTS.PRODUCT_CODE
        AND ORDER_MVTS.ORDER_CODE = ORDER.CODE
        AND ORDER.CUSTOMER_CODE = :CUSTOMER
        AND PRODUCT.CODE = :CODE';
    These two solutions work, but number 1 is bad in theory because it uses a hint,
    and number 2 may be difficult to code.
    So my question is: does someone know another solution to force the use of index ORDER_DHR_VALID that is simple and reliable?
    Thank you very much for support
    Edited by: remaï on Apr 1, 2009 4:08 PM

    What version of Oracle do you have? CBO behaviour differs between 9i and 10g.
    Usually the cost-based optimizer does not want to use an index for a range (>, <) condition with bind variables, because the optimizer cannot use statistics to determine selectivity, and the default selectivity for range operators is low.
    (As I remember, the default selectivity for '>' is 5%; you have two conditions, > and <, so the resulting selectivity is 0.05*0.05 = 0.0025, treated as two independent events, whereas the selectivity of the other conditions,
    ORDER_MVTS.MVT_TYPE = 'DELIVERY' or ORDER.CUSTOMER_CODE = 'ADIDAS', looks much better to the CBO.)
    The best solution I see is not to use bind variables. Your query looks like a search-style query that does not execute very often, so you will not gain much performance from skipping execution plan creation.
    Edited by: JustasVred on Apr 1, 2009 10:10 AM
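    For reference, a sketch of remaï's solution 1, using the INDEX hint from the post (the aliases are added here, and with aliases the hint must reference the alias; note also that ORDER is a reserved word in Oracle, so the real table name is presumably different):
    SELECT /*+ INDEX(o ORDER_DHR_VALID) */
           SUM(m.NB_PACKAGE)
      FROM ORDER_MVTS m, PRODUCT p, ORDER o
     WHERE o.DHR_VALID >= TRUNC(:P_DTE_VAL)
       AND o.DHR_VALID <  TRUNC(:P_DTE_VAL) + 1
       AND m.MVT_TYPE = 'DELIVERY'
       AND p.CODE = m.PRODUCT_CODE
       AND m.ORDER_CODE = o.CODE
       AND o.CUSTOMER_CODE = :CUSTOMER
       AND p.CODE = :CODE;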

  • Loop with WMI Query taking too long, need to break out if time exceeds 5 min

    I've written a script that will loop through a list of computers and run a WMI query using the Win32_Product class. I am pinging each host first to ensure it's online, which avoids wasting time, but the issue I'm facing is that some of the machines
    are online yet the WMI query takes too long and holds up the script. I wanted to add a timeout to the WMI query so that if a particular host does not respond to the query or gets stuck, the loop will break out and go to the next computer object. I've added my code
    below:
    $Computers = @()
    $computers += "BES10-BH"
    $computers += "AUTSUP-VSUS"
    $computers += "AppClus06-BH"
    $computers += "Aut01-BH"
    $computers += "AutLH-VSUS"
    $computers += "AW-MGMT01-VSUS"
    $computers += "BAMBOOAGT-VSUS"
    ## Loop through all computer objects found in the $Computers array
    $JavaInfo = @()
    FOREACH ($Client in $Computers)
    {
        ## Gather WMI installed software info from each client queried
        Clear-Host
        Write-Host "Querying: $Client" -ForegroundColor "Yellow"
        $HostCount++
        ## Ping the current client first to make sure it is online
        $Online = (Test-Connection -ComputerName $Client -Count 1 -Quiet)
        IF ($Online -eq $true)
        {
            $ColItem = Get-WmiObject -Class Win32_Product -ComputerName $Client -ErrorAction SilentlyContinue |
                Where-Object { ($_.Name -match "Java") -and (!($_.Name -match "Auto|Visual")) } |
                Select-Object Name, Version
            FOREACH ($Item in $ColItem)
            {
                ## Write host name as variable
                $HostNm = ($Client).ToUpper()
                ## Named version of Java; if Java is not installed, fill the variable with "Not Installed"
                $JavaVerName = $Item.Name
                IF ([string]::IsNullOrEmpty($JavaVerName)) { $JavaVerName = "Not Installed" }
                ## Version of Java; if Java is not installed, fill the variable with "Not Installed"
                $JavaVer = $Item.Version
                IF ([string]::IsNullOrEmpty($JavaVer)) { $JavaVer = "Not Installed" }
                ## Create new object to organize Host, JavaName & Version
                $JavaProp = New-Object -TypeName PSObject -Property @{
                    "HostName"    = $HostNm
                    "JavaVerName" = $JavaVerName
                    "JavaVer"     = $JavaVer
                }
                ## Add new object data "JavaProp" from the loop into the array "JavaInfo"
                $JavaInfo += $JavaProp
            }
        }
        Else
        {
            Write-Host "$Client didn't respond, skipping..." -ForegroundColor "Red"
        }
    }

    Let me give you a bigger picture of the script. I've included the emailed table the script produces and the actual script. While running the script, certain hosts get hung up on the WMI query, which causes the script to never complete. From one of
    the posts I was able to use the Get-WmiCustom function to add a timeout of 15 seconds so that the script will continue if it gets stuck. The problem is that when a host is skipped I am not aware of it, because my script does not report the server that timed out.
    If you look at ZLBH02-VSUS highlighted in the report, you can see that it's reporting "not installed" when it should say something to the effect of "query hung".
    How can I add a variable in the function that will be available outside the function, which I can key off of to differentiate between a host that does not have the software installed and one that failed to query?
    Script Output:
    Script:
    ## Name: JavaReportWMI.ps1 ##
    ## Requires: Power Shell 2.0 ##
    ## Created: January 06, 2015 ##
    <##> $Version = "Script Version: 1.0" <##>
    <##> $LastUpdate = "Updated: January 06, 2015" <##>
    ## Configure Compliant Java Versions Below ##
    <##> $java6 = "6.0.430" <##>
    <##> $javaSEDEVKit6 = "1.6.0.430" <##>
    <##> $java7 = "7.0.710" <##>
    <##> $javaSEDEVKit7 = "1.7.0.710" <##>
    <##> $java8 = "8.0.250" <##>
    <##> $javaSEDDEVKit8 = "1.8.0.250" <##>
    ## Import Active Directory Module
    Import-Module ActiveDirectory
    $Timeout = "False"
    Function Get-WmiCustom([string]$computername,[string]$namespace,[string]$class,[int]$timeout=15)
    $ConnectionOptions = new-object System.Management.ConnectionOptions
    $EnumerationOptions = new-object System.Management.EnumerationOptions
    $timeoutseconds = new-timespan -seconds $timeout
    $EnumerationOptions.set_timeout($timeoutseconds)
    $assembledpath = "\\" + $computername + "\" + $namespace
    #write-host $assembledpath -foregroundcolor yellow
    $Scope = new-object System.Management.ManagementScope $assembledpath, $ConnectionOptions
    $Scope.Connect()
    $querystring = "SELECT * FROM " + $class
    #write-host $querystring
    $query = new-object System.Management.ObjectQuery $querystring
    $searcher = new-object System.Management.ManagementObjectSearcher
    $searcher.set_options($EnumerationOptions)
    $searcher.Query = $querystring
    $searcher.Scope = $Scope
    trap { $_ } $result = $searcher.get()
    return $result
    ## Log time for duration clock
    $Start = Get-Date
    $StartTime = "StartTime: " + $Start.ToShortTimeString()
    ## Environmental Variables
    $QueryMode = $Args #parameter for either "Desktops" / "Servers"
    $CsvPath = "C:\Scripts\JavaReport\JavaReport" + "$QueryMode" + ".csv"
    $Date = Get-Date
    $Domain = $env:UserDomain
    $HostName = ($env:ComputerName).ToLower()
    ## Regional Settings
    ## Used for testing
    IF ($Domain -eq "abc") {$Region = "US"; $SMTPDomain = "abc.com"; `
    $ToAddress = "[email protected]"; `
    $ReplyDomain = "abc.com"; $smtpServer = "relay.abc.com"}
    ## Control Variables
    $FromAddress = "JavaReport@$Hostname.na.$SMTPDomain"
    $EmailSubject = "Java Report - $Region"
    $computers = @()
    $computers += "ZLBH02-VSUS"
    $computers += "AUTSUP-VSUS"
    $computers += "AppClus06-BH"
    $computers += "Aut01-BH"
    $computers += "AutLH-VSUS"
    $computers += "AW-MGMT01-VSUS"
    $computers += "BAMBOOAGT-VSUS"
    #>
    ## Loop through all computer objects found in $Computes Array
    $JavaInfo = @()
    FOREACH($Client in $Computers)
    ## Gather WMI installed Software info from each client queried
    Clear-Host
    Write-Host "Querying: $Client" -foregroundcolor "yellow"
    $HostCount++
    $Online = (test-connection -ComputerName ADRAP-VSUS -Count 1 -Quiet)
    IF($Online -eq "True")
    $ColItem = Get-WmiCustom -Class Win32_Product -Namespace "root\cimv2" -ComputerName $Client -ErrorAction SilentlyContinue | `
    Where {(($_.name -match "Java") -and (!($_.name -match "Auto|Visual")))} | `
    Select-Object Name,Version
    FOREACH($Item in $ColItem)
    ## Write Host Name as variable
    $HostNm = ($Client).ToUpper()
    ## Query Named Version of Java, if Java is not installed fill variable as "No Java Installed
    $JavaVerName = $Item.name
    IF([string]::IsNullOrEmpty($JavaVerName))
    {$JavaVerName = "No Installed"}
    ## Query Version of Java, if Java is not installed fill variable as "No Java Installed
    $JavaVer = $Item.Version
    IF([string]::IsNullOrEmpty($JavaVer))
    {$JavaVer = "Not Installed"}
    ## Create new object to organize Host,JavaName & Version
    $JavaProp = New-Object -TypeName PSObject -Property @{
    "HostName" = $HostNm
    "JavaVerName" = $JavaVerName
    "JavaVer" = $JavaVer
    ## Add new object data "JavaProp" from loop into array "JavaInfo"
    $JavaInfo += $JavaProp
    Else
    {Write-Host "$Client didn't respond, Skipping..." -foregroundcolor "Red"}
    #Write-Host "Host Query Count: $LoopCount" -foregroundcolor "yellow"
    ## Sort Array
    Write-Host "Starting Array" -foregroundcolor "yellow"
    $JavaInfoSorted = $JavaInfo | Sort-object HostName
    Write-Host "Starting Export CSV" -foregroundcolor "yellow"
    ## Export CSV file
    $JavaInfoSorted | export-csv -NoType $CsvPath -Force
    $Att = new-object Net.Mail.Attachment($CsvPath)
    Write-Host "Building Table Header" -foregroundcolor "yellow"
    ## Table Header
    $list = "<table border=1><font size=1.5 face=verdana color=black>"
    $list += "<tr><th><b>Host Name</b></th><th><b>Java Ver Name</b></th><th><b>Ver Number</b></th></tr>"
    Write-Host "Building HTML Table" -foregroundcolor "yellow"
    FOREACH($Item in $JavaInfoSorted)
    Write-Host "$UniqueHost" -foregroundcolor "Yellow"
    ## Alternate Table Shading between Green and White
    IF($LoopCount++ % 2 -eq 0)
    {$BK = "bgcolor='E5F5D7'"}
    ELSE
    {$BK = "bgcolor='FFFFFF'"}
    ## Set Variables
    $JVer = $Item.JavaVer
    $Jname = $Item.JavaVerName
    ## Change Non-Compliant Java Versions to red in table
    IF((($jVer -like "6.0*") -and (!($jVer -match $java6))) -or `
    (($jName -like "*Java(TM) SE Development Kit 6*") -and (!($jName -match $javaSEDEVKit6))) -or `
    (($jVer -like "7.0*") -and (!($jVer -match $java7))) -or `
    (($jName -like "*Java SE Development Kit 7*") -and (!($jName -match $javaSEDEVKit7))))
    $list += "<tr $BK style='color: #ff0000'>"
    ## Compliant Java version are displayed in black
    ELSE
    $list += "<tr $BK style='color: #000000'>"
    ## Populate table with host name variable
    $list += "<td>" + $Item."HostName" + "</td>"
    ## Populate table with Java Version Name variable
    $list += "<td>" + $Item."JavaVerName" + "</td>"
    ## Populate table with Java Version variable
    $list += "<td>" + $Item."JavaVer" + "</td>"
    $list += "</tr>"
    $list += "</table></font>"
    $End = Get-Date
    $EndTime = "EndTime: " + $End.ToShortTimeString()
    #$TimeDiff = New-TimeSpan -Start $StartTime -End $EndTime
    $StartTime
    $EndTime
    $TimeDiff
    Write-Host "Total Hosts:$HostCount"
    ## Email Function
    Function SendEmail
    $msg = new-object Net.Mail.MailMessage
    $smtp = new-object Net.Mail.SmtpClient($smtpServer)
    $msg.From = ($FromAddress)
    $msg.ReplyTo =($ToAddress)
    $msg.To.Add($ToAddress)
    #$msg.BCC.Add($BCCAddress)
    $msg.Attachments.Add($Att)
    $msg.Subject = ($EmailSubject)
    $msg.Body = $Body
    $msg.IsBodyHTML = $true
    $smtp.Send($msg)
    $msg.Dispose()
    ## Email Body
    $Body = $Body + @"
    <html><body><font face="verdana" size="2.5" color="black">
    <p><b>Java Report - $Region</b></p>
    <p>$list</p>
    </html></body></font>
    <html><body><font face="verdana" size="1.0" color="red">
    <p><b> Note: Items in red do not have the latest version of Java installed. Please open a ticket to have an engineer address the issue.</b></p>
    </html></body></font>
    <html><body><font face="verdana" size="2.5" color="black">
    <p>
    $StartTime<br>
    $EndTime<br>
    $TimeDiff<br>
    $HostCount<br>
    </p>
    <p>
    Run date: $Date<br>
    $Version<br>
    $LastUpdate<br>
    </p>
    </html></body></font>
    ## Send Email
    SendEmail

  • Oracle - Query taking too long (Materialized view)

    Hi,
    I am extracting billing information and storing it in 3 different tables... in order to show total billing (80 to 90 columns, 1 million rows per month), I've used a materialized view... I do not have indexes on the 3 billing tables - I do have 3 indexes on the materialized view...
    at the moment it's taking too long to query the data (running a query via Toad fails and shows an "Out of Memory" error message; running a query via APEX does provide results but takes way too long)...
    Please advise how to make the query efficient...

    tparvaiz,
    Is it possible when building your materialized view to summarize and consolidate the data?
    Out of a million rows, what would your typical user do with that amount data if they could retrieve it readily? The answer to this question may indicate if and how to summarize the data within the materialized view.
    Jeff
    Edited by: jwellsnh on Mar 25, 2010 7:02 AM
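    A minimal sketch of Jeff's suggestion, assuming hypothetical table and column names (the point is to aggregate inside the materialized view rather than store detail rows):
    CREATE MATERIALIZED VIEW billing_summary_mv
    BUILD IMMEDIATE
    REFRESH COMPLETE ON DEMAND
    AS
    SELECT account_id,
           TRUNC(bill_date, 'MM') AS bill_month,
           SUM(amount)            AS total_amount,
           COUNT(*)               AS line_count
      FROM billing_detail
     GROUP BY account_id, TRUNC(bill_date, 'MM');
    -- Index the columns users filter on so reports hit the summary quickly:
    CREATE INDEX billing_summary_mv_ix ON billing_summary_mv (account_id, bill_month);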

  • SQL Query taking too long

    Hello there,
    Can someone please help me:
    I have this SQL query that is taking days to complete. They say when it ran on the former DB version 8 it used to run for 1 hour, but now that we're using 10g it's taking days... any help with that please?
    Texas

    Texas B wrote:
    | Id  | Operation                     | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT              |                  |     1 |   119 |  3171   (2)| 00:00:39 |
    |   1 |  NESTED LOOPS                 |                  |     1 |   119 |  3171   (2)| 00:00:39 |
    |   2 |   MERGE JOIN CARTESIAN        |                  |     1 |    71 |  3097   (2)| 00:00:38 |
    |*  3 |    TABLE ACCESS BY INDEX ROWID| PS_JRNL_LN       |     1 |    41 |   388   (0)| 00:00:05 |
    |*  4 |     INDEX SKIP SCAN           | PS_JRNL_LN       |     6 |       |   388   (0)| 00:00:05 |
    |   5 |    BUFFER SORT                |                  |   116K|  3416K|  2709   (2)| 00:00:33 |
    |   6 |     TABLE ACCESS FULL         | PS_BI_HDR        |   116K|  3416K|  2709   (2)| 00:00:33 |
    |*  7 |   TABLE ACCESS BY INDEX ROWID | PS_BI_ACCT_ENTRY |     1 |    48 |    73   (0)| 00:00:01 |
    |*  8 |    INDEX RANGE SCAN           | PS_BI_ACCT_ENTRY |     1 |       |    73   (0)| 00:00:01 |
    A few comments:
    1. Please re-edit your post and add the "Predicate Information" section below the plan, so that the filter and access predicates can be seen. They're quite helpful to understand the execution plan better.
    2. You're using bind variables, therefore the EXPLAIN PLAN output is only of limited use, since EXPLAIN PLAN doesn't perform "bind variable peeking". With "bind variable peeking" the optimizer peeks at the actual values passed when determining the execution plan. If you have a histogram generated on PS_JRNL_LN.JOURNAL_ID (check DBA/ALL/USER_TAB_COLUMNS.HISTOGRAM) or the values used are "out-of-range" (less or greater than recorded min and max value of column) then you might get different execution plans depending on the actual values passed.
    3. You can get the actual execution plan(s) from the shared pool by obtaining the SQL_ID of the statement (e.g. check V$SESSION) and use the DBMS_XPLAN.DISPLAY_CURSOR function for this SQL_ID
    4. The optimizer estimates that out of the 11 million rows of PS_JRNL_LN more or less no rows corresponds to this predicate:
    A.JOURNAL_ID = :1
    AND A.JRNL_LINE_STATUS = '1'
    Since for the unknown bind variable a hard coded default selectivity of 5% is used which corresponds to a cardinality approx. 550,000 rows, the JRNL_LINE_STATUS = '1' predicate seems to be quite selective.
    Is this estimate in the right ballpark or way off?
    Due to this estimate the optimizer uses a cartesian join which could generate a very large intermediate set if the estimate is way off, e.g. if 1,000 rows are returned instead of 0 the cartesian join will already generate 1,000 * 116,000 => 116,000,000 rows.
    This row source will then be used as driving source of a nested loop which means that many times the index and table lookup to PS_BI_ACCT_ENTRY will be performed.
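    A minimal sketch of point 3 (the text filter and SQL_ID lookup below are illustrative; substitute the real SQL_ID you find):
    -- Locate the statement's SQL_ID, e.g. via V$SESSION (SQL_ID/PREV_SQL_ID) or by its text:
    SELECT sql_id, child_number, sql_text
      FROM v$sql
     WHERE sql_text LIKE '%PS_JRNL_LN%';
    -- Show the plan actually used by that cursor, including the predicate sections:
    SELECT *
      FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('&sql_id', NULL, 'TYPICAL'));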
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Query taking too long a time

    Hi friends,
    the view I have created on a table is taking too long to give me the results
    when I try to select * from table_name.
    Can someone suggest a solution please?
    CREATE OR REPLACE VIEW XXKDD_LATEST_SAL
    (RN, MONTH, YEAR, EMPLOYEE_NUMBER, POSITION,
    PAYROLL_NAME, DEPT, STATUS, TERMINATION_DATE, FULL_NAME,
    TOP3, BASIC_SALARY)
    AS
    select "RN","MONTH","YEAR","EMPLOYEE_NUMBER","POSITION","PAYROLL_NAME","DEPT","STATUS","TERMINATION_DATE","FULL_NAME","TOP3","BASIC_SALARY" from (
    SELECT ROW_NUMBER() OVER (PARTITION BY employee_number ORDER BY employee_number) rn, tp.*
    FROM (SELECT MONTH, YEAR, employee_number, position, payroll_name, dept, status, termination_date, full_name,
    ROW_NUMBER () OVER (PARTITION BY employee_number, basic_salary ORDER BY YEAR , MONTH) top3,
    DECODE (basic_salary,
    100000, 4500,
    24000, 1921,
    basic_salary
    ) basic_salary
    FROM kdd_pay_hr_sal_vw
    order by employee_number,year desc) tp
    WHERE top3 <= 1)
    select * from XXKDD_LATEST_SAL

    Read these informative threads:
    When your query takes too long ...
    HOW TO: Post a SQL statement tuning request - template posting
    And edit your post, add relevant details like database version, execution plan etc., it is all listed in the above links.
    And use the {noformat}{noformat} tags to keep your examples formatted, indented and readable.

  • Query taking too long, yet cost is low

    hi guys,
    I am running a query on two databases that were created the same way and have the same data.
    On one the cost is nearly a million, and it runs in a matter of a few seconds.
    On the other, the cost is 40,000, and it doesn't finish executing.
    I have looked at the explain plan, and there is no cartesian merge join on the second query, yet it is taking so long. What can I do to investigate this?
    thanks

    Could you please post a properly formatted explain plan output using DBMS_XPLAN.DISPLAY, including the "Predicate Information" section below the plan, to provide more details regarding your statement. Please use the \[code\] and \[code\] tags to enhance readability of the output provided:
    In SQL*Plus:
    SET LINESIZE 130
    EXPLAIN PLAN FOR <your statement>;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    Note that DBMS_XPLAN.DISPLAY is only available from 9i on.
    In previous versions you could run the following in SQL*Plus (on the server) instead:
    @?/rdbms/admin/utlxpls
    A different approach in SQL*Plus:
    SET AUTOTRACE ON EXPLAIN
    <run your statement>;
    will also show the execution plan.
    In order to get a better understanding where your statement spends the time you might want to turn on SQL trace as described here:
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
    and post the "tkprof" output here, too.
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Performance when using bind variables

    I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
    I've created a simple table of 100,000 records, each row having a single column of type integer. I populate it with the numbers 1 to 100,000.
    Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
    My first JAVA program runs without using bind variables as follows:
    loop
    stmt.executeUpdate("delete from nobind_test where id = " + i);
    end loop
    My second JAVA program uses bind variables as follows:
    pstmt = conn.prepareStatement("delete from bind_test where id = ?");
    loop
    pstmt.setString(1, String.valueOf(i));
    pstmt.executeUpdate();
    end loop;
    Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
    The trouble is that the program that does not use bind variables runs faster than the bind variable program.
    Can anyone tell me why this would be? Is my test too simple?
    Thanks.

    [email protected] wrote:
    I'm trying to show myself that bind variables improve performance (I believe it, I just want to see it).
    I've created a simple table of 100,000 records each row a single column of type integer. I populate it with a number between 1 and 100,000
    Now, with a JAVA program I delete 2,000 of the records by performing a loop and using the loop counter in my where predicate.
    Monitoring of v$SQL shows that program one doesn't use bind variables, and program two does use bind variables.
    The trouble is that the program that does not use bind variables runs faster than the bind variable program.
    Can anyone tell me why this would be? Is my test too simple?
    The point is that you have to find out where your test is spending most of the time.
    If you've just populated a table with 100,000 records and then start to delete randomly 2,000 of them, the database has to perform a full table scan for each of the records to be deleted.
    So probably most of the time is spent scanning the table over and over again, although most of blocks might already be in your database buffer cache.
    The difference between the hard parse and the soft parse of such a simple statement might be negligible compared to effort it takes to fulfill each delete execution.
    You might want to change the setup of your test: Add a primary key constraint to your test table and delete the rows using this primary key as predicate. Then the time it takes to locate the row to delete should be negligible compared to the hard parse / soft parse difference.
    You probably need to increase your iteration count, because deleting 2,000 records this way probably completes too quickly and introduces measurement issues. Try to delete more rows; then you should be able to spot a significant and consistent difference between the two approaches.
    In order to prevent any performance issues from a potentially degenerated index due to numerous DML activities, you could also just change your test case to query for a particular column of the row corresponding to your predicate rather than deleting it.
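    A minimal sketch of the revised test setup Randolf suggests (the table name reuses the one from the post; the CONNECT BY population is illustrative):
    CREATE TABLE bind_test (
      id NUMBER,
      CONSTRAINT bind_test_pk PRIMARY KEY (id)
    );
    INSERT INTO bind_test (id)
    SELECT level FROM dual CONNECT BY level <= 100000;
    COMMIT;
    -- Each delete now locates its row via the primary key index, so the
    -- hard parse vs. soft parse difference is no longer drowned out by full scans:
    DELETE FROM bind_test WHERE id = :i;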
    Regards,
    Randolf
    Oracle related stuff blog:
    http://oracle-randolf.blogspot.com/
    SQLTools++ for Oracle (Open source Oracle GUI for Windows):
    http://www.sqltools-plusplus.org:7676/
    http://sourceforge.net/projects/sqlt-pp/

  • Query taking too long

    I am running a fairly complex query with several table joins
    and it is taking too long. What can I do to improve performance?
    Thanks.
    Frank

    Dan's first suggestion is key - if you are doing multiple
    table joins, you want to make sure your indexes are set up on your
    tables correctly. If you have access to the database, this should
    be your first step. Rationalize's stored procedure suggestion is
    also a great idea (again, if you have access to create and manage
    stored procedures on your DB).
    Other than than, most databases usually have some sort of SQL
    efficiency analysis tool. SQL server has one built into their Query
    Analyzer tool. I would recommend using something like that to
    streamline your SQL. Like Dan said, something as simple as the
    order of elements in your where clause might make a big
    difference.
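    In Oracle, for instance, a quick way to check which columns of a table are already indexed, and to add a missing index on a join column, might look like this (table and column names are hypothetical):
    SELECT index_name, column_name, column_position
      FROM user_ind_columns
     WHERE table_name = 'ORDER_ITEMS'
     ORDER BY index_name, column_position;
    -- Add an index on a join/filter column that has none:
    CREATE INDEX order_items_order_id_ix ON order_items (order_id);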

  • Query taking too long to finish

    Hi,
    I'm running a query which is
    Delete from msg where ID IN (select ID from deletedtrans );
    It's taking too long to complete; it had been running for 24 hours already without finishing, so I cancelled it. I don't understand why it's taking so long, does anyone have any idea? I feel that this query should not take this long to complete.

    That seems to be too small piece of information to comment anything.
    1. How many records are there in "deletedtrans" table ?
    2. How many records from "msg" table are expected to be deleted ?
    3. Are statistics up-to-date on "msg" and "deletedtrans" tables ?
    4. Is "ID" column defined as NULL or NOT NULL in both "msg" and "deletedtrans" tables ? (Not sure whether this will cause any problem, but...)
    5. Is this statement being executed when other users/applications are accessing/updating "msg" table ?
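    A quick sketch of how points 1-3 might be checked from SQL*Plus (this assumes the tables live in your own schema):
    SELECT COUNT(*) FROM deletedtrans;
    SELECT table_name, num_rows, last_analyzed
      FROM user_tables
     WHERE table_name IN ('MSG', 'DELETEDTRANS');
    -- Refresh optimizer statistics if they are stale:
    BEGIN
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'MSG');
      DBMS_STATS.GATHER_TABLE_STATS(USER, 'DELETEDTRANS');
    END;
    /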

  • Query taking too long on Oracle9i

    Hi All
    I am running a query on our prod database (Oracle8i 8.1.7.4) and running the same query on a test DB (Oracle9i version 4). The query is taking too long (more than 10 min) in the test DB. Both databases are installed on the same machine (IBM AIX V4), and the table schema and data are the same.
    Any help would be appreciated.
    Here are the results.
    FASTER ONE
    ORACLE 8i using Production
    Statistics
    864 recursive calls
    68 db block gets
    159855 consistent gets
    20297 physical reads
    0 redo size
    1310148 bytes sent via SQL*Net to client
    68552 bytes received via SQL*Net from client
    1036 SQL*Net roundtrips to/from client
    28 sorts (memory)
    1 sorts (disk)
    15525 rows processed
    SLOWER ONE
    ORACLE 9i using Test
    Statistics
    819 recursive calls
    80 db block gets
    22981568 consistent gets
    1361 physical reads
    0 redo size
    1194902 bytes sent via SQL*Net to client
    34193 bytes received via SQL*Net from client
    945 SQL*Net roundtrips to/from client
    0 sorts (memory)
    1 sorts (disk)
    14157 rows processed

    319404-
    To help us better understand the problem,
    1) Could you post your execution plan on the two different databases?
    2) Could you list indexes (if any, on these tables)?
    3) Are any of the objects in the 'from list' a view?
    If so, are you using a user defined function to create the view?
    4) Why are you using the table 'cal_instance_relationship' twice in the 'from' clause?
    5) Can't your query be the following?
    SELECT f.person_id, f.course_cd, cv.responsible_org_unit_cd cowner, f.fee_cal_type Sem, f.fee_ci_sequence_number seq_no,
    sua.unit_cd, uv.owner_org_unit_cd uowner, uv.supervised_contact_hours hours, 0 chg_rate, sum(f.transaction_amount) tot_fee,
    ' ' tally
    FROM unit_version uv,
    cal_instance_relationship cir1,
    chg_method_apportion cma,
    student_unit_attempt sua,
    course_version cv,
    fee_ass f
    WHERE f.fee_type = 'VET-MATFEE'
    AND f.logical_delete_dt IS NULL
    AND f.s_transaction_type IN ('ASSESSMENT', 'MANUAL ADJ')
    AND f.fee_ci_sequence_number > 400
    AND f.course_cd = cv.course_cd
    AND cv.version_number = (SELECT MAX(v.version_number) FROM course_version v
    WHERE v.course_cd = cv.course_cd)
    AND f.person_id = sua.person_id
    and f.course_cd = sua.course_cd
    AND f.fee_type = cma.fee_type
    AND f.fee_ci_sequence_number = cma.fee_ci_sequence_number
    AND cma.load_ci_sequence_number = cir1.sub_ci_sequence_number
    AND cir1.sup_cal_type = 'ACAD-YR'
    AND cir1.sub_cal_type = sua.cal_type
    AND cir1.sub_ci_sequence_number = sua.ci_sequence_number
    AND sua.unit_attempt_status NOT IN ('DUPLICATE','DISCONTIN')
    AND sua.unit_cd = uv.unit_cd
    AND sua.version_number = uv.version_number
    GROUP BY f.person_id, f.course_cd, cv.responsible_org_unit_cd , f.fee_cal_type, f.fee_ci_sequence_number,
    sua.unit_cd, uv.owner_org_unit_cd, uv.supervised_contact_hours;

  • Possible for Oracle to consider constraints when using bind variable?

    Consider the following table with a check constraint listing the possible values of the column
    create table TEST_TABLE(
    my_column varchar2(10));
    insert into TEST_TABLE select 'VALUE1' from dba_objects;
    alter table TEST_TABLE
    add constraint TEST_TABLE_CHK1
    check (my_column in ('VALUE1', 'VALUE2'));
    begin dbms_stats.gather_table_stats(ownname=>user,tabname=>'TEST_TABLE');END;
    Let's see the number of logical I/Os needed for the following SQL statements.
    (The value was obtained by the delta of:
    select m.value from v$mystat m, v$statname n
    where name='session logical reads' and m.statistic#=n.statistic#)
    If a string literal is used:
    declare
       n number;
    begin
      select count(*) into n from test_table where my_column='VALUE1';
    end;
    Consistent Gets: 21
    declare
       n number;
    begin
      select count(*) into n from test_table where my_column='VALUE2';
    end;
    Consistent Gets: 21
    declare
       n number;
    begin
      select count(*) into n from test_table where my_column='VALUE3';
    end;
    Consistent Gets: 0
    Oracle can eliminate the table if it knows the queried value can't satisfy the constraint. Good.
    (Actually, the execution plan for the last query included the 'FILTER' operation.)
    However, if a bind variable is used:
    declare
       n number;
       x varchar2(10);
    begin
      x := 'VALUE1';
      select count(*) into n from test_table where my_column=x;
    end;
    Consistent Gets: 21
    declare
       n number;
       x varchar2(10);
    begin
      x := 'VALUE3';
      select count(*) into n from test_table where my_column=x;
    end;
    Consistent Gets: 21
    Oracle can't eliminate the table using the constraint. I can understand that because bind variables are used, Oracle can't directly eliminate the table when generating the execution plan. However, is it possible to eliminate the table, or at least employ some shortcut at runtime? If not, will this be a performance loss compared with using value literals when a check constraint exists?
    (And is it possible to use autotrace on a PL/SQL block in SQL*Plus?)
    Oracle:
         10.2.0.4 SE
         11.2.0.2 SE
    OS:
         RHEL5

    However, is it possible to eliminate the table, or at least employ some shortcut at runtime?
    I can't see how to do this. Oracle has a SQL text with an embedded bind variable in it, and for that SQL text it has an execution plan in the shared pool that will be used irrespective of the actual bound values at runtime.
    Maybe in 11G, with adaptive cursor sharing / plan bind awareness, Oracle might be smart enough to introduce a second execution plan for the VALUE3 case.
    If not, will this be a performance loss compared with using value literals when a check constraint exists?
    Only if you submit the dumb query and search for records with VALUE3... Normally your application code would not hold/generate these queries.
    Will it?
    For columns whose values are bound by a CHECK constraint, one might even consider never using bind variables, since very likely you will have only a few versions of queries that use these columns.
    Not?
    Edited by: Toon Koppelaars on Jun 22, 2011 1:20 PM
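    On 11g you can at least observe whether adaptive cursor sharing has marked the statement as bind sensitive or bind aware (the text filter below is illustrative):
    SELECT sql_id, child_number, executions, is_bind_sensitive, is_bind_aware
      FROM v$sql
     WHERE UPPER(sql_text) LIKE '%TEST_TABLE%MY_COLUMN%';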

  • Csv no data found when using bind variables

    Hi,
    I have created a report that uses 2 date variables to limit the query, and the rows display as they should. But clicking the CSV link opens a CSV containing one row stating 'No data found...'. If I set static values in place of the variables, I get the right result. How can I use bind variables and still get the right result in the CSV (without coding my own function)?
    Kaja

    I am having the same problem.
    The export link opens a new session without submitting the current page. The page item values are therefore not saved to cache before the export is undertaken.
    Any report using page item bind variables in the where clause will not export correctly as the bind values are either null or those of the previous page submit.
    Any ideas on how to get round this??

  • Compile error "input line is too long" when using useLegacyAOT

    I'm using Flash Builder 4.7 and the release of AIR 4 on Windows 7 to package for iOS... As soon as I include 3 or more ANEs I can no longer compile to iOS (and also use the new "useLegacyAOT no" command).  When I try, I get an error: "The input line is too long.   Compilation failed while executing : compile-abc"
    I found a couple of other similar issues.  One post suggested pointing to a folder instead of individual ANEs (in Package Contents)--but this doesn't fix the problem.
    Another post for a slightly different problem said it was fixed in the release of AIR 4.
    Any ideas?  I wonder if I move my AIR SDK to a simple path (like "c:\air\adt.jar" instead of "C:\Program Files\Adobe\Adobe Flash Builder 4.7 (64 Bit)\eclipse\plugins\com.adobe.flash.compiler_4.7.0.349722\AIRSDK\lib\adt.jar") if it'd help. 
    Thanks in advance!

    Awesome!  I got used to the new useLegacyAOT thing very fast.  In the meantime, I have to decide whether to remove all my ANEs and test or sit through an interminable build sequence.
    What's the current plan for a new release?  Is there any way I could get a beta copy just to use during development?
    Thanks!
