Cleaning large query volumes

We are currently trying to get rid of a large number of unused BW queries in our system. A department wishes to put a range of queries into quarantine to see the impact on the business.
Is it possible to remove access to a selection of queries without actually deleting them?

Do you have BW Statistics installed and switched on?
This is the easiest way to determine the actual use of queries, but beware: some reports are designed to be run perhaps only once per year, so you need to publicise the clean-up well in your organisation to avoid repercussions.
What should have happened is that your system's architect created naming conventions supported by authorizations. That way you would have no difficulty identifying "test" reports not executed in the last 6-12 months. That's what I always set up.
In any case, the way to proceed would be to identify reports by the last execution date.
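For example, a query along these lines against the statistics data will surface the candidates. This is only a sketch: the actual BW statistics tables and columns depend on your release (the RSDDSTAT* area), so treat every name below as a placeholder to adapt, not the real schema.
    -- Illustrative only: table and column names are placeholders, not the
    -- actual BW statistics schema; date arithmetic shown in Oracle syntax.
    SELECT   query_name,
             MAX(execution_date) AS last_run
    FROM     query_runtime_stats           -- hypothetical statistics table
    GROUP BY query_name
    HAVING   MAX(execution_date) < ADD_MONTHS(SYSDATE, -12)  -- idle 12+ months
    ORDER BY last_run;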
Once you have a subset of all queries, identify the person who created each one (or, where your business doesn't create its own queries, identify who uses them) and email them the list.
Ask them to confirm which ones they use, with a deadline to reply.
Follow up if you don't get a response, using polite emails: "Four weeks until these queries get removed - please confirm which ones to keep." Two weeks... one week... three days... two days... one day...
Then delete them. They might complain, but they were given fair warning, and as long as your management stands by this process it's very hard to argue that they weren't informed.
I worked for a very large international organisation where we had to rebuild maybe three of the 2,000-odd queries that were deleted because the users did not react to the emails. No big deal in the end, and we got a nice clean system.

Similar Messages

  • VSS Error: "An unexpected error occurred when cleaning up snapshot volumes. Confirm that all snapped volumes are correctly re-synchronized with the original volumes."

    Hi all,
    At a customer's site I have a problem with a fresh installation of Backup Exec 2014. Every backup (full or incremental) always reports the following
    error: "An unexpected error occurred when cleaning up snapshot volumes. Confirm that all snapped volumes are correctly re-synchronized with the original volumes."
    It's not a Backup Exec problem itself; backups using "Windows Server Backup" also fail with the same error.
    On this site I have three servers; the error is only generated for one of them. Here’s a short overview:
    Server1: Windows Server 2012 R2, latest patchlevel, physical machine, Domain Controller and Fileserver. Backup Exec is installed on this machine;
    backup is written directly to a SAS tape loader. The error is generated on this server.
    Server2: Windows Server 2008 R2, latest patchlevel, virtual machine, running on Citrix Xen Server 6.2. Used for remote desktop services, no errors
    on this server.
    Server3: Windows Server 2012 R2, latest patchlevel, virtual machine, database server with some SQL Instances, no errors on this server.
    As I said, the error is reported only on Server1, no matter whether it is a full or an incremental backup. During the backup I found the following errors
    in the event log (translated from a German system):
    Event ID: 12293
    Volume Shadow Copy Service error: An error occurred while calling a routine on the Shadow Copy Provider {89300202-3cec-4981-9171-19f59559e0f2}. Routine details: Error calling Query() [0x80042302] [hr = 0x80042302, unexpected component error of the Volume Shadow Copy Service].
    Process:
    Volume Shadow Copy polling
    Volume Shadow Copy delete
    Context:
       Execution context: Coordinator
    And
    Event ID: 8193
    Volume Shadow Copy Service error: Unexpected error calling routine IVssCoordinator::Query. hr = 0x8004230f, unexpected error from the Volume Shadow Copy Service provider.
    Process:
    Volume Shadow Copy delete
    Context:
       Execution context: Coordinator
    There are some articles about this error in the knowledge base and on the web, but they either do not help or do not apply to my environment, for example:
    http://www.symantec.com/business/support/index?page=content&id=TECH38338&actp=search&viewlocale=en_US&searchid=1423724381707
    What I have already tried:
    Disabled antivirus during the whole backup
    Installed the latest Service Pack for Backup Exec
    Rebooted the server
    "vssadmin list writers" does not show any errors
    Consulted eventid.net for other possible solutions
    No limits set for vaa
    Any more ideas from you guys?
    Best regards,

    Hi Shaon,
    vssadmin list providers gave the following output:
    vssadmin list providers
    vssadmin 1.1 - Volume Shadow Copy Service administrative command-line tool
    (C) Copyright 2001-2013 Microsoft Corp.
    Provider name: "Microsoft File Share Shadow Copy provider"
       Provider type: Fileshare
       Provider Id: {89300202-3cec-4981-9171-19f59559e0f2}
       Version: 1.0.0.1
    Provider name: "Microsoft Software Shadow Copy provider 1.0"
       Provider type: System
       Provider Id: {b5946137-7b9f-4925-af80-51abd60b20d5}
       Version: 1.0.0.7
    Unfortunately there is no Symantec VSS provider listed.
    Best regards,
    Christoph

  • CS3 PNG reading/writing failing on very large network volumes

    We are experiencing an issue with the built-in PNG file format plugin that is consistent across multiple versions of OS X on fully-patched CS3 Design Premium. I'm hoping to hear that others have seen something similar.
    We recently deployed a 1.8TB network drive via Windows Server 2003. Since then we cannot read or write PNG files, not even preview one through the file dialog, without hanging the CS3 application interminably. We have done our testing in Photoshop and Illustrator. The same occurs when opening a file directly from the Finder into these apps. We are connecting to the server via SMB. All other file operations function properly with every other format, and working with PNGs outside of the Creative Suite works fine.
    It is definitely isolated to PNG. We have tested exactly the same file on a sub-terabyte volume on exactly the same file server and the PNG files opened and saved as expected. It is only on the large terabyte+ volume that it exhibits these issues.
    There is no recovering from the error. The application beachballs and never returns to activity, no matter how long one waits.
    We have tried an alternative PNG plugin to rule out some special cross-platform metadata handling that somehow occurs under the hood for this file format. We tested SuperPNG as a replacement and it functions as one would expect, opening and saving files properly and without issue.
    Has anyone else seen this? We don't have drives this large attached locally to any of our machines, so we cannot test this outside of the network environment.
    We are running fully-patched Creative Suite 3 Design Premium on G5- and Intel-based Macs running a mix of 10.5.x and 10.4.x, all regularly updated.
    I look forward to hearing anyone else's experiences with this setup.
    Thanks!
    Greg

    Buko--
    Thanks for the response. However, I would hope that addressing incompatibility issues is one way in which products grow over time.
    I think it is fair to expect Adobe to be responsible enough to make its file formats work on server volumes. No computer is an island, especially today.
    I have found a workable solution, and I am not here to fix a bug or complain to the world. I'm here in a discussion group to spark a discussion, as certainly I understand that this is not the forum from which to expect technical support.
    I remain interested in others' experiences.
    Respectfully,
    Greg

  • Item too large for volume's format fat32

    Lion OS. External HD formatted FAT32. I try to copy files and receive the message: item too large for volume's format

    Hi,
    The FAT32 file size limit is 4 GB (2^32 - 1 bytes), so larger files cannot be copied no matter how much free space the volume has.
    You should use the NTFS file system instead.
    Cheers
    Mirek

  • I have added photos to a portfolio website I'm building and a lot of them look VERY grainy when you click on them. They are nice clean large files and can print at high res well over 16" x 20", so what's happened to them in iWeb?

    When you are dealing with websites, image file size is a trade off between quality and download speed. There's not a lot of point to having high quality images if they take too long to download in the browser.
    Nowadays we also have to consider the device that the end user is viewing the images on. An image that is optimized for viewing on a large screen is total overkill and unsuitable for those using mobile devices.
    Really we should be supplying different versions of media files for different devices using @media rules in the stylesheet, but this is rather outside the scope of iWeb. If you use the built-in image optimizer and the iWeb Photo template with slideshow, the application will optimize the images according to the way you set this function in preferences, and the slideshow size will be automatically reduced for those viewing it on smaller screens.
    If you want to give your viewers the opportunity to view large, high quality images, you can supply them as a download.

  • Backup too large for volume

    I have two MacBook Pros (120GB & 160GB) backing up to a 500GB Time Machine drive.
    Both were backing up just fine; however, in the past month the 160GB
    MacBook Pro keeps getting this message:
    "backup too large for volume"
    and subsequently the backup fails.
    The size of the backup is less than the free space on the TM drive.
    Any help?

    dave,
    *_Incremental Backups Seem Too Large!_*
    Open the Time Machine Prefs on the Mac in question. How much space does it report you have "Available"? When a backup is initiated how much space does it report you need?
    Now, consider the following, it might give you some ideas:
    Time Machine performs backups at the file level. If a single bit in a large file is changed, the WHOLE file is backed up again. This is a problem for programs that save data to monolithic virtual disk files that are modified frequently. These include Parallels, VMware Fusion, Aperture vaults, and the databases that Entourage and Thunderbird create. These should be excluded from backup using the Time Machine Preferences exclusion list. You will, however, need to back up these files manually to another external disk.
    One poster observed regarding Photoshop: “If you find yourself working with large files, you may discover that TM is suddenly backing up your scratch disk's temp files. This is useless, find out how to exclude these (I'm not actually sure here). Alternatively, turn off TM whilst you work in Photoshop.” [http://discussions.apple.com/thread.jspa?threadID=1209412]
    If you do a lot of movie editing, unless these files are excluded, expect Time Machine to treat revised versions of a single movie as entirely new files.
    If you frequently download software or video files that you only expect to keep for a short time, consider excluding the folder these are stored in from Time Machine backups.
    If you have recently created a new disk image or burned a DVD, Time Machine will target these files for backup unless they are deleted or excluded from backup.
    *Events-Based Backups*
    Time Machine does not compare file for file to see if changes have been made. If it had to rescan every file on your drive before each backup, it would not be able to perform backups as often as it does. Rather, it looks for EVENTS (fseventsd) that take place involving your files and folders. Moving/copying/deleting/saving files and folders creates events that Time Machine looks for. [http://arstechnica.com/reviews/os/mac-os-x-10-5.ars/14]
    Installing new software, upgrading existing software, or updating Mac OS X system software can create major changes in the structure of your directories. Every one of these changes is recorded by the OS as an event. Time Machine will backup every file that has an event associated with it since the installation.
    Files or folders that are simply moved or renamed are counted as NEW files or folders. If you rename any file or folder, Time Machine will back up the ENTIRE file or folder again no matter how big or small it is.
    George Schreyer describes this behavior: “If you should want to do some massive rearrangement of your disk, Time Machine will interpret the rearranged files as new files and back them up again in their new locations. Just renaming a folder will cause this to happen. This is OK if you've got lots of room on your backup disk. Eventually, Time Machine will thin those backups and the space consumed will be recovered. However, if you really want to recover the space in the backup volume immediately, you can. To do this, bring a Finder window to the front and then click the Time Machine icon on the dock. This will activate the Time Machine user interface. Navigate back in time to where the old stuff exists and select it. Then pull down the "action" menu (the gear thing) and select "delete all backups" and the older stuff vanishes.” (http://www.girr.org/mac_stuff/backups.html)
    *TechTool Pro Directory Protection*
    This disk utility feature creates backup copies of your system directories. Obviously these directories are changing all the time. So, depending on how it is configured, these backup files will be changing as well which is interpreted by Time Machine as new data to backup. Excluding the folder these backups are stored in will eliminate this effect.
    *Backups WAY Too Large*
    If an initial full backup or subsequent incremental backup is tens or hundreds of Gigs larger than expected, check to see that all unwanted external hard disks are still excluded from Time Machine backups.
    This includes the Time Machine backup drive ITSELF. Normally, Time Machine is set to exclude itself by default. But on rare occasions it can forget. When your backup begins, Time Machine mounts the backup on your desktop. (For Time Capsule users it appears as a white drive icon labeled something like “Backup of (your computer)”.) If, while it is mounted, it does not show up in the Time Machine Prefs “Do not back up” list, then Time Machine will attempt to back ITSELF up. If it is not listed while the drive is mounted, then you need to add it to the list.
    *FileVault / Boot Camp / iDisk Syncing*
    Note: Leopard has changed the way it deals with FileVault disk images, so it is not necessary to exclude your Home folder if you have FileVault activated. Additionally, Time Machine ignores Boot Camp partitions as the manner in which they are formatted is incompatible. Finally, if you have your iDisk Synced to your desktop, it is not necessary to exclude the disk image file it creates as that has been changed to a sparsebundle as well in Leopard.
    If none of the above seem to apply to your case, then you may need to attempt to compress the disk image in question. We'll consider that if the above fails to explain your circumstance.
    Cheers!

  • File too large for volume format?

    I'm trying to copy a 4.6GB .m4v file from my hard disk to an 8GB USB flash drive.
    When I drag the movie in the finder, I get a dialogue telling me the file "is too large for the volume's format". That's a new one for me. If the flash drive has 8GB available, and the file is only 4.6GB in size, how is that too large? Does it have to do with the way the flash drive is formatted? It's presently formatted as MS DOS FAT 32. Is that the problem? Do I need to reformat as HFS+?  Or is it some other problem?

    The flash drive is pre-formatted FAT32, which has a maximum allowable file size of 4 GB, so your 4.6GB file exceeds it regardless of free space. Change the flash drive format for OS X:
    Drive Partition and Format
    1. Open Disk Utility in your Utilities folder.
    2. After DU loads select your flash drive (the entry with the mfgr.'s ID and size) from the left side list. Click on the Partition tab in the DU main window.
    3. Under the Volume Scheme heading set the number of partitions from the drop down menu to one. Click on the Options button, set the partition scheme to GUID then click on the OK button. Set the format type to Mac OS Extended (Journaled.) Click on the Apply button and wait until the process has completed.

  • How to clean large system log files?

    I believe that OS X saves a lot of system data in log files that become very large.
    I would like to clear old history logs.
    How may I view and clean system log files?

    Thank you Niel.
    I have obtained the list at /private/var/log.
    There are a lot of files in there.
    Since I am not familiar with the functions of these files: can I simply delete all of the files in /private/var/log without causing any problems, or would that have some unintended consequences?

  • Large query doesn't broadcast

    Hi fellows,
    I have a few queries that dump lots of data, sometimes more than 500,000 cells. They run OK in Analyzer, just take a little longer. But the problem is that when users broadcast such a query through email, they do not get any data in the zip file. I am guessing this is happening because of the large file. Do you guys have any suggestions for how this can be solved?
    An Open Hub Destination can't be created at this point, and the user insists on having the same amount of data without any filtering.
    Please help if you know of a way this can be done in broadcasting alone.
    Thanks,

    Hello Isac,
    While broadcasting this query with its large result set, are you getting any error regarding the size of the result set?
    You can refer to SAP Note 927530, section "Avoiding Out-of-Memory exceptions using the 'safety belt'", which may solve your problem.
    This note will refer you to another note which suggests setting the
    following two parameters:
    BICS_DA_RESULT_SET_LIMIT_DEF - Default that can be changed, e.g. via
                                   the property dialog.
    BICS_DA_RESULT_SET_LIMIT_MAX - Maximum that is allowed to be requested.
    Try configuring both parameter values to 2 billion and check if you get the same problem again.
    I hope this helps.
    Regards,
    Archana

  • How to Improve Excel Rendering on Large Data volume BIP 11.1.1.6?

    Hi,
    On BIP 11.1.1.6, we are trying to output 70,000+ records to Excel 2007, and rendering the output takes about 5-6 minutes. PDF takes just as long. CSV takes about 30 seconds to generate. The query runs in SQL Plus/Toad in less than 1 second to return all 70,000+ rows. I have tweaked some config settings such as those below, but with no improvement. Has anyone encountered this issue and found anything that improves the rendering? Any help is appreciated. Thanks.
    Use BI Publisher's XSLT processor = True
    Enable scalable feature of XSLT processor = True
    Enable XSLT runtime optimization = True
    Pages cached during processing = 1000
    Enable multithreading = True
    FO Parsing Buffer Size = 70000000
    Java version on the server is 1.6...
    Temp space is over 1.5 GB

    I was facing a Java heap space error with one of my large reports. If you are running oc4j, make sure you allocate enough heap for the JVM to process reports. In my case the heap size was 512MB. After changing it to 2GB and enabling AggressiveHeap, the error went away and the performance improvement was amazing.
    This link has more details on how to set these JVM commandline options:
    http://docs.oracle.com/cd/B10464_05/core.904/b10379/optj2ee.htm#i1006219

  • Apple RAID Card, 4 SAS drives, 1 bootable large striped volume?

    I have an Apple RAID card and 4 SAS 250GB drives. I did the migration when I set up the RAID to give me 1 drive for my bootable OSX volume and the other 3 combined into unused unpartitioned space.
    But I want all 4 drives combined and striped for one large bootable OSX volume. I'd prefer not to have to reinstall OSX to do this.
    Do I need to buy a utility? Can I use Disk Utility or the Apple RAID Utility? I've done searches and read all the help and can't find an answer.

    Is it possible to install a third disk (300GB SAS) and use it as a separate volume?
    Yes, of course.
    How do I have to set this up using Apple RAID Utility?
    You don't need to. A third drive should just appear on the desktop. You only need RAID Utility if you want to add that drive to an array, which doesn't sound like the case here.

  • Large query result set

    Hi all,
    At the moment we have some java classes (not ejb - cmp/bmp) for search in
    our ejb application.
    Now we have a problem: the records have grown too numerous (millions), and
    sometimes a query results in the retrieval of millions of records. This results in
    too much memory consumption in our ejb application. What is the best way to
    address this issue?
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

    You can think of the following options:
    1) Paging: read only a few thousand records at a time and maintain an index to page
    through the complete dataset (see the sketch after this message).
    2) Caching:
    a) You can create a serialized data file on the server to cache the result set
    and use that to browse through. You may do on-the-fly
    compression/decompression while sending data to the client.
    b) An applet-based solution where caching could be on the client side. Look at
    http://www.sitraka.com/software/jclass/cs_ims.html
    thanks,
    Srinivas
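    To make option 1 concrete, here is a minimal sketch of block-wise retrieval in Oracle-style SQL. Table and column names are illustrative, and the inner ORDER BY must be on a deterministic key for the pages to be stable:
    -- Fetch rows 101-200 of the ordered result; issue one such query per page
    -- instead of materialising the whole result set in the EJB tier.
    SELECT *
    FROM  (SELECT t.*, ROWNUM rn
           FROM  (SELECT record_id, record_data
                  FROM   records
                  ORDER  BY record_id) t
           WHERE ROWNUM <= 200)
    WHERE rn > 100;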
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Thanks Slava Imeshev,
    We already have search criteria and a limit. When records exceed that limit
    we prompt the user that it may take some time: do you want to proceed? If
    he clicks yes then we retrieve those records. This results in a lot of memory
    consumption.
    I was thinking there might be some way to retrieve a block of records at a
    time from the database rather than all records of a query. I wonder how
    internet search sites work, where thousands of sites/pages match the criteria
    and the client can move back and forth across any page.
    Regards,
    Parvez
    "Slava Imeshev" <[email protected]> wrote in message
    news:[email protected]...
    Hi chauhan,
    You may want to narrow search criteria along with processing a
    limited number of resulting records. I.e. if the size of the result
    is bigger than a limit, you stop fetching results and notify the client
    that search criteria should be narrowed.
    HTH.
    Regards,
    Slava Imeshev
    "chauhan" <[email protected]> wrote in message
    news:[email protected]...
    Hi all,
    At the moment we have some java classes (not ejb - cmp/bmp) for
    search
    in
    our ejb application.
    Now we have a problem i.e. records have grown too high( millions ) and
    sometimes query results in retrieval of millions of records. It
    results
    in
    too much memory consumtion in our ejb application. What is the best
    way
    to
    address this issue.
    Any help will be highly appreciated.
    Thanks & regards,
    Parvez

  • OPEN cursor for large query

    OPEN cursor
    When you OPEN a cursor for a multi-row query, is there a straightforward way to have it retrieve only a limited number of rows at a time and then automatically release those rows as you FETCH through them? I'm thinking of setting up multiple sequential cursors, and opening and closing them as the rows are processed. But I'm hoping there might be a better way.
    The problem is that I'm running out of TEMPORARY tablespace during the OPEN cursor stage.
    The application I am working on needs to work in Standard Edition and Personal Edition versions of Oracle.
    Thank you.

    Thanks - I had read the documentation before, but interpreted it differently.
    What I had read was in:
    http://download-east.oracle.com/docs/cd/B19306_01/appdev.102/b14261/sqloperations.htm#i45288
    The extract of interest was:
    Opening a Cursor
    Opening the cursor executes the query and identifies the result set, which consists of all rows that meet the query search criteria. For cursors declared using the FOR UPDATE clause, the OPEN statement also locks those rows. An example of the OPEN statement follows:
    DECLARE
    CURSOR c1 IS SELECT employee_id, last_name, job_id, salary FROM employees
    WHERE salary > 2000;
    BEGIN
    OPEN C1;
    Rows in the result set are retrieved by the FETCH statement, not when the OPEN statement is executed.
    My interpretation was that the result of the query was put into the temporary tablespace and then retrieved into the program during the FETCH.
    Assuming I was wrong, what I'm wondering now is how I can possibly be running out of temporary space during this OPEN cursor process.
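    For reference, a minimal PL/SQL sketch of one standard way to bound memory on the client side: fetching in batches with BULK COLLECT ... LIMIT, reusing the names from the documentation example above. Note this does not by itself explain TEMP usage at OPEN time, since sorts and hash joins in the query can consume TEMP before the first row is fetched.
    DECLARE
      CURSOR c1 IS
        SELECT employee_id, last_name, job_id, salary
        FROM employees
        WHERE salary > 2000;
      TYPE emp_tab IS TABLE OF c1%ROWTYPE;
      l_rows emp_tab;
    BEGIN
      OPEN c1;
      LOOP
        FETCH c1 BULK COLLECT INTO l_rows LIMIT 500;  -- at most 500 rows held
        EXIT WHEN l_rows.COUNT = 0;
        -- process l_rows here; the next FETCH reuses the collection
      END LOOP;
      CLOSE c1;
    END;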

  • ADFBC SQLException, large query, bind variables

    Hi,
    I'm getting an SQLException while running a query with a dynamically built WHERE clause using named bind variables.
    These are the exception details:
    SQLException: SQLState(null) vendor code(17110)
    java.sql.SQLException: execution completed with warning
    at oracle.jdbc.driver.DatabaseError.newSqlException(DatabaseError.java:93)
    ... etc
    SQLWarning: reason(Warning: execution completed with warning) SQLstate(null) vendor code(17110)
    I've tried removing the View Object and all the related components on the page, and reinstating them all, but the same exception keeps occurring.
    The view in the db has a lot of records, approximately 126,000.
    The query does execute and rows are returned successfully, but I don't like the look of this exception.
    Any help on this would be appreciated!

    Hi John,
    Below is the stack trace.
    [684] SQLException: SQLState(null) vendor code(17110)
    [685] java.sql.SQLException: execution completed with warning
    [686]      at oracle.jdbc.driver.DatabaseError.newSqlException(DatabaseError.java:93)
    [687]      at oracle.jdbc.driver.DatabaseError.newSqlException(DatabaseError.java:111)
    [688]      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:342)
    [689]      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:282)
    [690]      at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:639)
    [691]      at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:185)
    [692]      at oracle.jdbc.driver.T4CPreparedStatement.execute_for_rows(T4CPreparedStatement.java:633)
    [693]      at oracle.jdbc.driver.OracleStatement.execute_maybe_describe(OracleStatement.java:1048)
    [694]      at oracle.jdbc.driver.T4CPreparedStatement.execute_maybe_describe(T4CPreparedStatement.java:535)
    [695]      at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1126)
    [696]      at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3001)
    [697]      at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3043)
    [698]      at oracle.jbo.server.ViewObjectImpl.getQueryHitCount(ViewObjectImpl.java:2273)
    [699]      at oracle.jbo.server.ViewObjectImpl.getQueryHitCount(ViewObjectImpl.java:2227)
    [700]      at oracle.jbo.server.QueryCollection.getEstimatedRowCount(QueryCollection.java:2560)
    [701]      at oracle.jbo.server.ViewRowSetImpl.getEstimatedRowCount(ViewRowSetImpl.java:1965)
    [702]      at oracle.jbo.server.ViewObjectImpl.getEstimatedRowCount(ViewObjectImpl.java:5987)
    [703]      at oracle.adf.model.bc4j.DCJboDataControl.getEstimatedRowCount(DCJboDataControl.java:965)
    [704]      at oracle.adf.model.binding.DCIteratorBinding.getEstimatedRowCount(DCIteratorBinding.java:2969)
    [705]      at oracle.jbo.uicli.binding.JUCtrlRangeBinding.getEstimatedRowCount(JUCtrlRangeBinding.java:115)
    [706]      at oracle.adfinternal.view.faces.model.binding.FacesCtrlRangeBinding$FacesModel.getRowCount(FacesCtrlRangeBinding.java:395)
    [707]      at oracle.adf.view.faces.component.UIXCollection.getRowCount(UIXCollection.java:271)
    [708]      at oracle.adf.view.faces.model.ModelUtils.findLastIndex(ModelUtils.java:117)
    [709]      at oracle.adf.view.faces.component.TableUtils.getLast(TableUtils.java:65)
    [710]      at oracle.adf.view.faces.component.TableUtils.getLast(TableUtils.java:39)
    [711]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.table.TableUtils.getVisibleRowCount(TableUtils.java:125)
    [712]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.table.RowData.<init>(RowData.java:22)
    [713]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.table.TableRenderingContext.<init>(TableRenderingContext.java:56)
    [714]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.TableRenderer.createRenderingContext(TableRenderer.java:375)
    [715]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.TableRenderer.encodeAll(TableRenderer.java:198)
    [716]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.DesktopTableRenderer.encodeAll(DesktopTableRenderer.java:80)
    [717]      at oracle.adfinternal.view.faces.renderkit.core.CoreRenderer.encodeEnd(CoreRenderer.java:169)
    [718]      at oracle.adf.view.faces.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:624)
    [719]      at oracle.adf.view.faces.component.UIXCollection.encodeEnd(UIXCollection.java:456)
    [720]      at oracle.adfinternal.view.faces.uinode.UIComponentUINode._renderComponent(UIComponentUINode.java:317)
    [721]      at oracle.adfinternal.view.faces.uinode.UIComponentUINode.render(UIComponentUINode.java:262)
    [722]      at oracle.adfinternal.view.faces.uinode.UIComponentUINode.render(UIComponentUINode.java:239)
    [723]      at oracle.adfinternal.view.faces.ui.composite.ContextPoppingUINode$ContextPoppingRenderer.render(ContextPoppingUINode.java:224)
    [724]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:346)
    [725]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:301)
    [726]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderChild(BaseRenderer.java:412)
    [727]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:330)
    [728]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:222)
    [729]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderContent(BaseRenderer.java:129)
    [730]      at oracle.adfinternal.view.faces.ui.BaseRenderer.render(BaseRenderer.java:81)
    [731]      at oracle.adfinternal.view.faces.ui.laf.base.xhtml.XhtmlLafRenderer.render(XhtmlLafRenderer.java:69)
    [732]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:346)
    [733]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:301)
    [734]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderChild(BaseRenderer.java:412)
    [735]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:330)
    [736]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:222)
    [737]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderContent(BaseRenderer.java:129)
    [738]      at oracle.adfinternal.view.faces.ui.BaseRenderer.render(BaseRenderer.java:81)
    [739]      at oracle.adfinternal.view.faces.ui.laf.base.xhtml.XhtmlLafRenderer.render(XhtmlLafRenderer.java:69)
    [740]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:346)
    [741]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:301)
    [742]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderChild(BaseRenderer.java:412)
    [743]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:330)
    [744]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:222)
    [745]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderContent(BaseRenderer.java:129)
    [746]      at oracle.adfinternal.view.faces.ui.laf.base.xhtml.BorderLayoutRenderer.renderIndexedChildren(BorderLayoutRenderer.java:42)
    [747]      at oracle.adfinternal.view.faces.ui.laf.base.xhtml.BorderLayoutRenderer.renderContent(BorderLayoutRenderer.java:71)
    [748]      at oracle.adfinternal.view.faces.ui.BaseRenderer.render(BaseRenderer.java:81)
    [749]      at oracle.adfinternal.view.faces.ui.laf.base.xhtml.XhtmlLafRenderer.render(XhtmlLafRenderer.java:69)
    [750]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:346)
    [751]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:301)
    [752]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderChild(BaseRenderer.java:412)
    [753]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:330)
    [754]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderIndexedChild(BaseRenderer.java:222)
    [755]      at oracle.adfinternal.view.faces.ui.BaseRenderer.renderContent(BaseRenderer.java:129)
    [756]      at oracle.adfinternal.view.faces.ui.BaseRenderer.render(BaseRenderer.java:81)
    [757]      at oracle.adfinternal.view.faces.ui.laf.base.xhtml.XhtmlLafRenderer.render(XhtmlLafRenderer.java:69)
    [758]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:346)
    [759]      at oracle.adfinternal.view.faces.ui.BaseUINode.render(BaseUINode.java:301)
    [760]      at oracle.adfinternal.view.faces.ui.composite.UINodeRenderer.renderWithNode(UINodeRenderer.java:90)
    [761]      at oracle.adfinternal.view.faces.ui.composite.UINodeRenderer.render(UINodeRenderer.java:36)
    [762]      at oracle.adfinternal.view.faces.ui.laf.oracle.desktop.PageLayoutRenderer.render(PageLayoutRenderer.java:76)
    [763]      at oracle.adfinternal.view.faces.uinode.UIXComponentUINode.renderInternal(UIXComponentUINode.java:177)
    [764]      at oracle.adfinternal.view.faces.uinode.UINodeRendererBase.encodeEnd(UINodeRendererBase.java:53)
    [765]      at oracle.adf.view.faces.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:624)
    [766]      at oracle.adfinternal.view.faces.renderkit.RenderUtils.encodeRecursive(RenderUtils.java:54)
    [767]      at oracle.adfinternal.view.faces.renderkit.core.CoreRenderer.encodeChild(CoreRenderer.java:242)
    [768]      at oracle.adfinternal.view.faces.renderkit.core.CoreRenderer.encodeAllChildren(CoreRenderer.java:265)
    [769]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.PanelPartialRootRenderer.renderContent(PanelPartialRootRenderer.java:65)
    [770]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.BodyRenderer.renderContent(BodyRenderer.java:117)
    [771]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.PanelPartialRootRenderer.encodeAll(PanelPartialRootRenderer.java:147)
    [772]      at oracle.adfinternal.view.faces.renderkit.core.xhtml.BodyRenderer.encodeAll(BodyRenderer.java:60)
    [773]      at oracle.adfinternal.view.faces.renderkit.core.CoreRenderer.encodeEnd(CoreRenderer.java:169)
    [774]      at oracle.adf.view.faces.component.UIXComponentBase.encodeEnd(UIXComponentBase.java:624)
    [775]      at javax.faces.webapp.UIComponentTag.encodeEnd(UIComponentTag.java:645)
    [776]      at javax.faces.webapp.UIComponentTag.doEndTag(UIComponentTag.java:568)
    [777]      at oracle.adf.view.faces.webapp.UIXComponentTag.doEndTag(UIXComponentTag.java:100)
    [778]      at _plus._gl._DynamicViews._jspService(_DynamicViews.java:1245)
    [779]      at com.orionserver.http.OrionHttpJspPage.service(OrionHttpJspPage.java:59)
    [780]      at oracle.jsp.runtimev2.JspPageTable.service(JspPageTable.java:462)
    [781]      at oracle.jsp.runtimev2.JspServlet.internalService(JspServlet.java:598)
    [782]      at oracle.jsp.runtimev2.JspServlet.service(JspServlet.java:522)
    [783]      at javax.servlet.http.HttpServlet.service(HttpServlet.java:856)
    [784]      at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:712)
    [785]      at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:369)
    [786]      at com.evermind.server.http.ServletRequestDispatcher.unprivileged_forward(ServletRequestDispatcher.java:286)
    [787]      at com.evermind.server.http.ServletRequestDispatcher.access$100(ServletRequestDispatcher.java:50)
    [788]      at com.evermind.server.http.ServletRequestDispatcher$2.oc4jRun(ServletRequestDispatcher.java:192)
    [789]      at oracle.oc4j.security.OC4JSecurity.doPrivileged(OC4JSecurity.java:283)
    [790]      at com.evermind.server.http.ServletRequestDispatcher.forward(ServletRequestDispatcher.java:197)
    [791]      at com.sun.faces.context.ExternalContextImpl.dispatch(ExternalContextImpl.java:346)
    [792]      at com.sun.faces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:152)
    [793]      at oracle.adfinternal.view.faces.application.ViewHandlerImpl.renderView(ViewHandlerImpl.java:157)
    [794]      at com.sun.faces.lifecycle.RenderResponsePhase.execute(RenderResponsePhase.java:107)
    [795]      at com.sun.faces.lifecycle.LifecycleImpl.phase(LifecycleImpl.java:245)
    [796]      at com.sun.faces.lifecycle.LifecycleImpl.render(LifecycleImpl.java:137)
    [797]      at javax.faces.webapp.FacesServlet.service(FacesServlet.java:214)
    [798]      at com.evermind.server.http.ResourceFilterChain.doFilter(ResourceFilterChain.java:64)
    [799]      at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl._invokeDoFilter(AdfFacesFilterImpl.java:228)
    [800]      at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl._doFilterImpl(AdfFacesFilterImpl.java:197)
    [801]      at oracle.adfinternal.view.faces.webapp.AdfFacesFilterImpl.doFilter(AdfFacesFilterImpl.java:123)
    [802]      at oracle.adf.view.faces.webapp.AdfFacesFilter.doFilter(AdfFacesFilter.java:103)
    [803]      at com.evermind.server.http.EvermindFilterChain.doFilter(EvermindFilterChain.java:15)
    [804]      at oracle.adf.model.servlet.ADFBindingFilter.doFilter(ADFBindingFilter.java:162)
    [805]      at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:620)
    [806]      at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:369)
    [807]      at com.evermind.server.http.HttpRequestHandler.doProcessRequest(HttpRequestHandler.java:865)
    [808]      at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:447)
    [809]      at com.evermind.server.http.HttpRequestHandler.serveOneRequest(HttpRequestHandler.java:215)
    [810]      at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:117)
    [811]      at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:110)
    [812]      at oracle.oc4j.network.ServerSocketReadHandler$SafeRunnable.run(ServerSocketReadHandler.java:260)
    [813]      at oracle.oc4j.network.ServerSocketAcceptHandler.procClientSocket(ServerSocketAcceptHandler.java:239)
    [814]      at oracle.oc4j.network.ServerSocketAcceptHandler.access$700(ServerSocketAcceptHandler.java:34)
    [815]      at oracle.oc4j.network.ServerSocketAcceptHandler$AcceptHandlerHorse.run(ServerSocketAcceptHandler.java:880)
    [816]      at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:303)
    [817]      at java.lang.Thread.run(Thread.java:595)
    [818] SQLWarning: reason(Warning: execution completed with warning) SQLstate(null) vendor code(17110)
    Thanks for the reply.

  • Tuning large query

    Hello,
    I have a query that will produce about 3M rows. The main problem is that when I run EXPLAIN PLAN for it and look at the results, the estimated total time is about 999 hours.
    Also, all joins are shown as "full scan". I understand that the full scans occur because all the records in the joined tables are needed in the total result set, so the optimizer does not use any indexes there...
    I have a query similar to this example:
    select
        sub.field1,
        sub.field2,
        sub.field3,
        case
            when sub.something > 1 then ...
            else ...
        end something_more
    from (
        select
            field1,
            field2,
            field3,
            (select something from sometable where sometable.field = table1.field) AS something
        from table1
        left join table2 on (...)
        left join table3 on (...)
        where ...
    ) sub
    What might be driving the optimizer crazy here? Or might it be useful to use some kind of optimizer hint?

    Post back some of the requested information, formatted with code tags, from these threads
    [How to post a tuning request|http://forums.oracle.com/forums/post!reply.jspa?messageID=3877933]
    [When your query takes too long|http://forums.oracle.com/forums/thread.jspa?threadID=501834]
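    One frequent culprit in query shapes like the one above is the scalar subquery in the SELECT list, which can be executed once per row of the ~3M-row result. A hedged sketch of the usual rewrite, replacing it with an outer join so the optimizer can visit sometable once: names follow the original example, the join conditions and filter are illustrative, and sometable.field must be unique for the join not to multiply rows.
    select
        t1.field1,
        t1.field2,
        t1.field3,
        st.something                                  -- joined once, hash-join friendly
    from table1 t1
    left join sometable st on st.field = t1.field
    left join table2 t2 on t2.field1 = t1.field1      -- illustrative condition
    left join table3 t3 on t3.field1 = t1.field1      -- illustrative condition
    where t1.field1 is not null;                      -- illustrative filter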
