Help with a correlated subquery - flawed results

Hi,
Can anyone tell me what I am doing wrong here? I am trying to get the employees who have above-average salaries for their departments.
SELECT first_name||' '||last_name  NAME,
       department_id,
       salary
FROM employee
WHERE salary > (SELECT AVG (salary)
                FROM employee
                GROUP BY department_id);
I run this and get this:
ORA-01427: single-row subquery returns more than one row
How can I amend this subquery to work for my needs? If I leave off:
GROUP BY department_id
then it gives the average for all employees.
Thanks!

Your subquery is not really correlated.
You should correlate it to the outer query by joining on deptno:
SELECT ename||' '||job,
       deptno,
       sal
FROM scott.emp e
WHERE sal > (SELECT AVG (sal)
                FROM scott.emp
                where deptno = e.deptno
                GROUP BY deptno);

ENAME||''||JOB DEPTNO SAL
ALLEN SALESMAN      30     1600
JONES MANAGER      20     2975
BLAKE MANAGER      30     2850
SCOTT ANALYST      20     3000
KING PRESIDENT      10     5000
FORD ANALYST      20     3000
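
For comparison, the same result can also be produced without a correlated subquery by computing each department's average with an analytic function. A minimal sketch against the same SCOTT.EMP table:

SELECT ename||' '||job, deptno, sal
FROM  (SELECT ename, job, deptno, sal,
              AVG(sal) OVER (PARTITION BY deptno) dept_avg
       FROM   scott.emp)
WHERE  sal > dept_avg;

The inline view computes the department average for every row, and the outer filter keeps only the rows above it.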

Similar Messages

  • Help with dynamic checkbox from database results

    Hi,
    I am creating some dynamic checkboxes from the results of a database read to sqlite.  I am able to get the data and the correct number of checkboxes is being created but I'm having a problem populating the labels for the checkboxes.  Here is the section of code that gets the results from the database and creates the checkboxes.
                    var result:Array = stmt.getResult().data;
                    for each( var obj:Object in result )
                    {
                        var cb:CheckBox = new CheckBox();
                        cb.label = obj.toString();
                        cb.setStyle("color", 0x00593B);
                        cb.setStyle("symbolColor", 0x000000);
                        container.contentGroup.addElement(cb);
                    }
    Neil

    This was answered on another forum for me.
    The line cb.label = obj.toString();
    needed to be changed to reference the actual field:
    cb.label = obj.myDatabaseFieldName;
    myDatabaseFieldName would be whatever column you are pulling the data from, and it should be the actual column name in the table.
    Neil

  • Help with Trigger - looping through SELECT results

    How do I loop through a SELECT query within a Custom Trigger? I can
    execute the query fine, but the PHP mysql_xxx_xxx() functions don't
    appear to work inside a custom trigger. For example:
    This ends up empty:
    $totalRows_rsRecordset = mysql_num_rows($rsRecordset);
    While this returns the correct # of records:
    $totalRows_rsRecordset = $rsRecordset->recordCount();
    I need to loop through the records like I would with
    mysql_fetch_assoc(), but those mysql_xxx_xxx() don't seem to work.
    This works great outside a custom trigger, but fails inside a custom
    trigger:
    do {
        array_push($myArray, $row_Recordset['id_usr']);
    } while ($row_Recordset = mysql_fetch_assoc($Recordset));
    What am I missing?
    Alec
    Adobe Community Expert

    Although the create trigger documentation does use the word "must", you are quite free to let the trigger fire during a refresh of the materialized view. Generally, you don't want the trigger to fire during a materialized view refresh-- in a multi-master replication environment, for example, you can easily end up with a situation where a change made in A gets replicated to B, the change fired by the trigger in B gets replicated back to A, the change made by the trigger in A gets replicated back to B, in an infinite loop that quickly brings both systems to their knees. But there is nothing that prevents you from letting it run assuming you are confident that you're not going to create any problems for yourself.
    You do need to be aware, however, that there is no guarantee that your trigger will fire only for new rows. For example, it is very likely that at some point in the future, you'll need to do a full refresh of the materialized view in which case all the rows will be deleted and re-inserted (thus causing your trigger to fire for every row). And if your materialized view does anything other than a simple "SELECT * FROM table@db_link", it is possible that an incremental refresh will update a row that actually didn't change because Oracle couldn't guarantee that the row would be unchanged.
    Justin
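    If you do want a trigger on the materialized view's container table but need it to ignore the rows touched by a refresh, one common pattern is to test for a refresh inside the trigger. A rough sketch, assuming your release exposes DBMS_MVIEW.I_AM_A_REFRESH and using hypothetical MY_MV and MY_AUDIT tables:
    CREATE OR REPLACE TRIGGER my_mv_aiu
    AFTER INSERT OR UPDATE ON my_mv            -- hypothetical MV container table
    FOR EACH ROW
    BEGIN
      -- skip DML that is being driven by a materialized view refresh
      IF NOT DBMS_MVIEW.I_AM_A_REFRESH THEN
        INSERT INTO my_audit (id, changed_on)  -- hypothetical audit table
        VALUES (:NEW.id, SYSDATE);
      END IF;
    END;
    /
    Whether you want that guard at all depends on the replication considerations Justin describes above.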

  • Subquery results need help with output

    Requirements:
    There is a request to dump every day's data from one table into a text file.
    Tables: INSPECTION_RESULTS, NJAS
    Primary key: RES_SYS_NO
    Parent table: INSPECTION_RESULTS
    Child table: NJAS
    Purpose:
    1. Obtain output results for the MIN and MAX RES_SYS_NO from the INSPECTION_RESULTS table for yesterday's date.
    2. Create a comma-delimited file for the following columns using the above table data output.
    Script thus far:
    SELECT NJA_RES_SYS_NO||','||NJA_TEST_REC_NO||','||NJA_LIC_ST_ID||','||NJA_ETS_ID||','||NJA_SW_VER||','||NJA_TECH_ID||','||NJA_VIN||','||NJA_LIC_NO||','
    ||NJA_LIC_JUR||','||NJA_LIC_SRC||','||NJA_MODEL_YR||','||NJA_MAKE||','||NJA_MODEL||','||NJA_VHCL_TYPE||','||NJA_BODY_STYLE||','||NJA_CERT_CLASS||','||
    NJA_GVWR||','||NJA_ASM_ETW||','||NJA_ETW_SRC||','||NJA_NO_OF_CYL||','||NJA_ENG_SIZE||','||NJA_TRANS_TYPE||','||NJA_DUAL_EXH||','||NJA_FUEL_CD||','
    || NJA_VID_SYS_NO||','|| NJA_VRT_REC_NO||','|| NJA_ARMSTDS_REC_NO||','|| NJA_RSN_F_N_TESTABLE||','|| NJA_DYNO_TESTABLE||','||
    NJA_CURR_ODO_RDNG||','|| NJA_PREV_ODO_RDNG||','|| NJA_PREV_TEST_DT||','|| NJA_TEST_TYPE||','||
    NJA_TEST_START_DT||','|| NJA_TEST_END_TIME||','|| NJA_EMISS_TEST_TYPE||','|| NJA_TEST_SEQ_NO||','
    ||NJA_LOW_MILE_EXEMP||','||NJA_TIRE_DRY||','|| NJA_REL_HUMID||','|| NJA_AMB_TEMP||','||NJA_BAR_PRESS||','|| NJA_HCF||','||
    NJA_GAS_CAP_ADAP_AVL||','|| NJA_GAS_CAP_REPL||','|| NJA_OVERALL_GAS_CAP_RES
    FROM NJAS
    WHERE NJA_RES_SYS_NO IN (SELECT MIN(NJA_RES_SYS_NO) FROM INSPECTION_RESULTS WHERE NJA_RES_SYS_NO IN
    (SELECT MAX(NJA_RES_SYS_NO) FROM INSPECTION_RESULTS, (SELECT SYSDATE-1 FROM DUAL)));
    It works but not sure if I am getting accurate results. Can anyone help me fine tune this subquery script? Thanks!
    Scott

    Duplicate post
    See my answer in your other posting here Re: need help with script

  • Help with intricate search and results display

    Hi All,
    I am looking for help with a problem I have, my knowledge on Numbers is limited, I have learnt a lot by trial and error but I do not know where to start on this problem.
    What I am trying to do is display results from Sheet 1 on Sheet 2 when I enter the letters of the REFERENCE into cell C3. Below is an example of the result I am trying to achieve -
    SHEET 2

    Reference: AB   (the search term)

    Reference   SZone   Parts   LZone   Parts
    AB10        3       75      2       100
    AB10        3       75      2       100
    AB10        3       75      2       100
    AB10        3       75      2       100
    AB11        3       75      2       100
    AB11        3       75      2       100
    AB11        3       75      2       100
    AB11        3       75      2       100
    AB12        3       75      2       100
    AB12        3       75      2       100
    AB13        3       75      3       75
    AB13        3       75      3       75
    AB14        3       75      3       75
    AB14        3       75      3       75
    AB15        3       75      1       200
    AB15        3       75      1       200
    AB16        3       75      3       75
    AB16        3       75      3       75
    AB21        3       75      3       75
    AB21        3       75      3       75
    AB22        3       75      3       75
    I have searched for AB by entering it in cell B3 on Sheet 2; an automatic search has been carried out on Sheet 1 and all the columns have been brought into Sheet 2 with the corresponding data and placed under the COLUMN HEADERS. I have over 4000 lines on Sheet 1, although the most results that will ever be pulled through will be 150.
    When the data is pulled in I will need to do other calculations in COLUMNS F, G and H, so the data needs to be mapped only to COLUMNS A to E.
    Also I want to be able to use this spreadsheet on my iPad.
    Has anybody got an idea/solution that will help?
    Thanks in advance.
    Ian

    Ian,
    We've had a recent report of troubles with large tables in the iOS version of Numbers, so you may want to consider ways to accomplish your goals without the full 4000-row set of data.
    I re-read the problem statement in your original post and see that I should have known that you were needing a solution for both platforms. But, I'm still not clear on what part you have figured out and what part you still need help with. We often refer to your second table as a "Breakout Table". There are different ways to program it. My favorite is to add a column to the main table that identifies rows that meet the transfer criteria and assigns them a serial number to help determine where they should go in the second table.
    Here's an example:
    In the new column on the right edge of the table T1, the expression is:
    =IF(ISERROR(FIND(T2 :: $A$1, A)), "", COUNT($F$1:F1))
    Note that the first cell in the Aux column is seeded with a zero.
    The expression in the body cells of the second table, T2, is:
    =INDEX(T1, MATCH(ROW()-1, T1 :: $F, 0), COLUMN())
    Note that the search term is to be entered into Cell A1 of table T2.
    Does this give you a start?
    Jerry

  • Slow MBP... Help with EtreCheck Results please?

    Problem description:
    My MBP is running incredibly slow! For instance, just typing sometimes bogs the system down, and it definitely cannot handle multitasking like it used to, or open programs quickly.
    In June of 2014 my Hard Drive crashed. It was under Apple Care and they replaced it. I lost Apple Care in July, and since June it has progressively gotten slower. When Yosemite came out, I reformatted my computer as new and restored from an external hard drive back up. I have watched the activity monitor and the RAM doesn’t seem to be under any pressure (solid green all the way across), and the CPU doesn’t seem to go that high either (50%). I have tried all the initial steps of uninstalling what I thought could be slowing it down (i.e., Norton and some other things) in addition to taking away all start up programs except dropbox.
    I'm willing to do just about anything to get this computer through another 18 months... and if someone could find a way to get 36 months out of it I'll give you a million points.
    Thanks in advance for the community's help. You guys rock.
    P.S. Had a little issue with Carbonite a while back. I think it is taken care of though.
    EtreCheck version: 2.1.7 (114)
    Report generated January 31, 2015 at 3:53:34 PM CST
    Download EtreCheck from http://etresoft.com/etrecheck
    Click the [Support] links for help with non-Apple products.
    Click the [Details] links for more information about that line.
    Click the [Adware] links for help removing adware.
    Hardware Information: ℹ️
        MacBook Pro (13-inch, Early 2011) (Technical Specifications)
        MacBook Pro - model: MacBookPro8,1
        1 2.3 GHz Intel Core i5 CPU: 2-core
        4 GB RAM Upgradeable
            BANK 0/DIMM0
                2 GB DDR3 1333 MHz ok
            BANK 1/DIMM0
                2 GB DDR3 1333 MHz ok
        Bluetooth: Old - Handoff/Airdrop2 not supported
        Wireless:  en1: 802.11 a/b/g/n
        Battery Health: Normal - Cycle count 408
    Video Information: ℹ️
        Intel HD Graphics 3000 - VRAM: 384 MB
            Color LCD 1280 x 800
    System Software: ℹ️
        OS X 10.10.2 (14C109) - Time since boot: 17:0:6
    Disk Information: ℹ️
        APPLE HDD HTS545050A7E362 disk0 : (500.11 GB)
            EFI (disk0s1) <not mounted> : 210 MB
            Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
            Macintosh HD (disk1) / : 498.88 GB (262.62 GB free)
                Core Storage: disk0s2 499.25 GB Online
        OPTIARC DVD RW AD-5970H 
    USB Information: ℹ️
        Apple Inc. FaceTime HD Camera (Built-in)
        Western Digital My Passport 070A 319.37 GB
            EFI (disk2s1) <not mounted> : 210 MB
            Yosemite Backup (disk2s2) /Volumes/Yosemite Backup : 319.03 GB (96.57 GB free)
        Apple Inc. Apple Internal Keyboard / Trackpad
        Apple Inc. BRCM2070 Hub
            Apple Inc. Bluetooth USB Host Controller
        Apple Computer, Inc. IR Receiver
    Thunderbolt Information: ℹ️
        Apple Inc. thunderbolt_bus
    Gatekeeper: ℹ️
        Mac App Store and identified developers
    Kernel Extensions: ℹ️
            /System/Library/Extensions
        [not loaded]    com.nike.sportwatch (1.0.0) [Support]
    Problem System Launch Daemons: ℹ️
        [failed]    com.apple.loginwindow.LFVTracer.plist [Details]
        [failed]    com.apple.NetBootClientHelper.plist [Details]
    Launch Agents: ℹ️
        [running]    com.brother.LOGINserver.plist [Support]
        [running]    com.carbonite.launchd.status.plist [Support]
        [invalid?]    com.examsoft.softest.plist [Support]
        [running]    com.nike.nikeplusconnect.plist [Support]
        [loaded]    com.oracle.java.Java-Updater.plist [Support]
    Launch Daemons: ℹ️
        [loaded]    com.adobe.fpsaud.plist [Support]
        [loaded]    com.carbonite.installhelper.plist [Support]
        [running]    com.carbonite.launchd.daemon.plist [Support]
        [running]    com.carbonite.launchd.monitor.plist [Support]
        [loaded]    com.carbonite.launchd.watcher.plist [Support]
        [running]    com.examsoft.softest.service.plist [Support]
        [loaded]    com.microsoft.office.licensing.helper.plist [Support]
        [loaded]    com.oracle.java.Helper-Tool.plist [Support]
        [loaded]    com.oracle.java.JavaUpdateHelper.plist [Support]
        [running]    com.wdc.SmartwareDriveService.plist [Support]
        [running]    com.wdc.WDSmartWareService.plist [Support]
    User Launch Agents: ℹ️
        [invalid?]    com.adobe.ARM.[...].plist [Support]
        [running]    com.amazon.music.plist [Support]
        [not loaded]    com.avast.install.plist [Support]
        [not loaded]    com.google.keystone.agent.plist [Support]
        [not loaded]    com.spotify.webhelper.plist [Support]
    User Login Items: ℹ️
        Carbonite    Application  (/Applications/Carbonite.app)
        Dropbox    Application  (/Applications/Dropbox.app)
    Internet Plug-ins: ℹ️
        FlashPlayer-10.6: Version: 16.0.0.296 - SDK 10.6 [Support]
        QuickTime Plugin: Version: 7.7.3
        Flash Player: Version: 16.0.0.296 - SDK 10.6 [Support]
        Default Browser: Version: 600 - SDK 10.10
        SharePointBrowserPlugin: Version: 14.4.5 - SDK 10.6 [Support]
        Silverlight: Version: 5.1.30514.0 - SDK 10.6 [Support]
        JavaAppletPlugin: Version: Java 8 Update 31 Check version
    Safari Extensions: ℹ️
        WasteNoTime [Installed]
        wrc [Installed]
        Ebates Cash Back [Installed]
        Better Facebook [Installed]
        Norton Internet Security [Cached]
        AdBlock [Installed]
        Awesome Screenshot [Installed]
    3rd Party Preference Panes: ℹ️
        Flash Player  [Support]
        Java  [Support]
    Time Machine: ℹ️
        Skip System Files: NO
        Mobile backups: ON
        Auto backup: YES
        Volumes being backed up:
            Macintosh HD: Disk size: 498.88 GB Disk used: 236.26 GB
        Destinations:
            Yosemite Backup [Local]
            Total size: 319.03 GB
            Total number of backups: 16
            Oldest backup: 2015-01-17 12:02:39 +0000
            Last backup: 2015-01-18 03:39:38 +0000
            Size of backup disk: Too small
                Backup size 319.03 GB < (Disk used 236.26 GB X 3)
    Top Processes by CPU: ℹ️
            13%    WindowServer
             8%    com.apple.WebKit.Networking
             7%    CarboniteDaemon
             4%    Safari
             1%    hidd
    Top Processes by Memory: ℹ️
        155 MB    Safari
        64 MB    ocspd
        52 MB    Finder
        43 MB    mds
        43 MB    Mail
    Virtual Memory Information: ℹ️
        30 MB    Free RAM
        1.16 GB    Active RAM
        1.14 GB    Inactive RAM
        1.07 GB    Wired RAM
        15.81 GB    Page-ins
        1.33 GB    Page-outs
    Diagnostics Information: ℹ️
        Jan 31, 2015, 03:58:22 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155822_[redacted ].crash
        Jan 31, 2015, 03:58:10 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155810_[redacted ].crash
        Jan 31, 2015, 03:58:00 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155800_[redacted ].crash
        Jan 31, 2015, 03:57:50 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155750_[redacted ].crash
        Jan 31, 2015, 03:57:40 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155740_[redacted ].crash
        Jan 31, 2015, 03:57:29 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155729_[redacted ].crash
        Jan 31, 2015, 03:57:21 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155721_[redacted ].crash
        Jan 31, 2015, 03:57:08 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155708_[redacted ].crash
        Jan 31, 2015, 03:56:58 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155658_[redacted ].crash
        Jan 31, 2015, 03:56:48 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155648_[redacted ].crash
        Jan 31, 2015, 03:56:38 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155638_[redacted ].crash
        Jan 31, 2015, 03:56:27 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155627_[redacted ].crash
        Jan 31, 2015, 03:56:18 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155618_[redacted ].crash
        Jan 31, 2015, 03:56:06 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155606_[redacted ].crash
        Jan 31, 2015, 03:55:56 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155556_[redacted ].crash
        Jan 31, 2015, 03:55:46 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155546_[redacted ].crash
        Jan 31, 2015, 03:55:36 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155536_[redacted ].crash
        Jan 31, 2015, 03:55:25 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155525_[redacted ].crash
        Jan 31, 2015, 03:55:18 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155518_[redacted ].crash
        Jan 31, 2015, 03:55:05 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-155505_[redacted ].crash
        Jan 31, 2015, 03:50:25 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-31-155025_[redacted].cp u_resource.diag [Details]
        Jan 31, 2015, 03:26:27 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-31-152627_[redacted].cr ash
        Jan 31, 2015, 12:21:25 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122125_[redacted ].crash
        Jan 31, 2015, 12:21:15 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122115_[redacted ].crash
        Jan 31, 2015, 12:21:04 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122104_[redacted ].crash
        Jan 31, 2015, 12:20:54 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122054_[redacted ].crash
        Jan 31, 2015, 12:20:44 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122044_[redacted ].crash
        Jan 31, 2015, 12:20:35 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122035_[redacted ].crash
        Jan 31, 2015, 12:20:23 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122023_[redacted ].crash
        Jan 31, 2015, 12:20:13 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122013_[redacted ].crash
        Jan 31, 2015, 12:20:03 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-122003_[redacted ].crash
        Jan 31, 2015, 12:19:53 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121953_[redacted ].crash
        Jan 31, 2015, 12:19:43 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121943_[redacted ].crash
        Jan 31, 2015, 12:19:33 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121933_[redacted ].crash
        Jan 31, 2015, 12:19:22 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121922_[redacted ].crash
        Jan 31, 2015, 12:19:12 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121912_[redacted ].crash
        Jan 31, 2015, 12:19:01 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121901_[redacted ].crash
        Jan 31, 2015, 12:18:41 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121841_[redacted ].crash
        Jan 31, 2015, 12:18:32 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121832_[redacted ].crash
        Jan 31, 2015, 12:18:20 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121820_[redacted ].crash
        Jan 31, 2015, 12:18:10 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121810_[redacted ].crash
        Jan 31, 2015, 10:51:16 AM    /Library/Logs/DiagnosticReports/LegacyFileVaultMessageTracer_2015-01-31-105116_ [redacted].crash
        Jan 31, 2015, 10:13:15 AM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-31-101315_[redacted].cp u_resource.diag [Details]
        Jan 30, 2015, 10:54:15 PM    Self test - passed
        Jan 30, 2015, 10:40:00 PM    /Library/Logs/DiagnosticReports/EvernoteHelper_2015-01-30-224000_[redacted].cra sh
        Jan 30, 2015, 10:33:40 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-30-223340_[redacted].cp u_resource.diag [Details]
        Jan 30, 2015, 10:24:35 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-30-222435_[redacted].cr ash
        Jan 30, 2015, 10:23:40 PM    /Library/Logs/DiagnosticReports/Evernote_2015-01-30-222340_[redacted].hang
        Jan 30, 2015, 10:23:39 PM    /Library/Logs/DiagnosticReports/Microsoft Office Setup Assistant_2015-01-30-222339_[redacted].hang
        Jan 30, 2015, 10:23:39 PM    /Library/Logs/DiagnosticReports/Microsoft Excel_2015-01-30-222339_[redacted].hang
        Jan 30, 2015, 03:40:00 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-30-154000_[redacted].cp u_resource.diag [Details]
        Jan 30, 2015, 03:03:51 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-30-150351_[redacted].cr ash
        Jan 30, 2015, 12:34:22 PM    /Library/Logs/DiagnosticReports/Carbonite_2015-01-30-123422_[redacted].hang
        Jan 30, 2015, 02:36:11 AM    /Library/Logs/DiagnosticReports/SymDaemon_2015-01-30-023611_[redacted].crash
        Jan 30, 2015, 12:42:20 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004220_[redacted ].crash
        Jan 30, 2015, 12:42:10 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004210_[redacted ].crash
        Jan 30, 2015, 12:42:00 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004200_[redacted ].crash
        Jan 30, 2015, 12:41:50 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004150_[redacted ].crash
        Jan 30, 2015, 12:41:40 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004140_[redacted ].crash
        Jan 30, 2015, 12:41:29 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004129_[redacted ].crash
        Jan 30, 2015, 12:41:19 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004119_[redacted ].crash
        Jan 30, 2015, 12:41:09 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004109_[redacted ].crash
        Jan 30, 2015, 12:40:59 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004059_[redacted ].crash
        Jan 30, 2015, 12:40:49 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004049_[redacted ].crash
        Jan 30, 2015, 12:40:38 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004038_[redacted ].crash
        Jan 30, 2015, 12:40:28 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004028_[redacted ].crash
        Jan 30, 2015, 12:40:18 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004018_[redacted ].crash
        Jan 30, 2015, 12:40:08 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-004008_[redacted ].crash
        Jan 30, 2015, 12:39:58 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-003958_[redacted ].crash
        Jan 30, 2015, 12:39:47 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-003947_[redacted ].crash
        Jan 30, 2015, 12:39:37 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-003937_[redacted ].crash
        Jan 30, 2015, 12:39:27 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-003927_[redacted ].crash
        Jan 30, 2015, 12:39:17 AM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-30-003917_[redacted ].crash
        Jan 29, 2015, 09:58:21 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215821_[redacted ].crash
        Jan 29, 2015, 09:58:11 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215811_[redacted ].crash
        Jan 29, 2015, 09:58:01 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215801_[redacted ].crash
        Jan 29, 2015, 09:57:52 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215752_[redacted ].crash
        Jan 29, 2015, 09:57:40 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215740_[redacted ].crash
        Jan 29, 2015, 09:57:30 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215730_[redacted ].crash
        Jan 29, 2015, 09:57:20 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215720_[redacted ].crash
        Jan 29, 2015, 09:57:09 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215709_[redacted ].crash
        Jan 29, 2015, 09:56:59 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215659_[redacted ].crash
        Jan 29, 2015, 09:56:49 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215649_[redacted ].crash
        Jan 29, 2015, 09:56:39 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215639_[redacted ].crash
        Jan 29, 2015, 09:56:29 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215629_[redacted ].crash
        Jan 29, 2015, 09:56:18 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215618_[redacted ].crash
        Jan 29, 2015, 09:56:08 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215608_[redacted ].crash
        Jan 29, 2015, 09:55:58 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215558_[redacted ].crash
        Jan 29, 2015, 09:55:48 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215548_[redacted ].crash
        Jan 29, 2015, 09:55:37 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215537_[redacted ].crash
        Jan 29, 2015, 09:55:27 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215527_[redacted ].crash
        Jan 29, 2015, 09:55:17 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-29-215517_[redacted ].crash
        Jan 29, 2015, 08:01:01 PM    /Library/Logs/DiagnosticReports/SymDaemon_2015-01-29-200101_[redacted].cpu_reso urce.diag [Details]
        Jan 29, 2015, 07:37:46 PM    /Library/Logs/DiagnosticReports/CarboniteDaemon_2015-01-29-193746_[redacted].cp u_resource.diag [Details]
        Jan 28, 2015, 04:51:10 PM    /Library/Logs/DiagnosticReports/SymDaemon_2015-01-28-165110_[redacted].cpu_reso urce.diag [Details]
        Jan 13, 2015, 09:03:09 PM    /Library/Logs/DiagnosticReports/Kernel_2015-01-13-210309_[redacted].panic [Details]
        Jan 31, 2015, 12:18:51 PM    /Library/Logs/DiagnosticReports/NetBootClientHelper_2015-01-31-121851_[redacted ].crash

    I use the freeware, EasyFind 4.9.3, to search for and delete unwanted files.
    Uninstalling Software: The Basics
    Most OS X applications are completely self-contained "packages" that can be uninstalled by simply dragging the application to the Trash.  Applications may create preference files that are stored in the /Home/Library/Preferences/ folder.  Although they do nothing once you delete the associated application, they do take up some disk space.  If you want you can look for them in the above location and delete them, too.
    Some applications may install an uninstaller program that can be used to remove the application.  In some cases the uninstaller may be part of the application's installer, and is invoked by clicking on a Customize button that will appear during the install process.
    Some applications may install components in the /Home/Library/Application Support/ folder.  You can check there to see if the application has created a folder.  You can also delete that folder from the Application Support folder.  Again, it does nothing but take up disk space once the application is trashed.
    Some applications may install a startupitem or a Log In item.  Startupitems are usually installed in the /Library/StartupItems/ folder and less often in the /Home/Library/StartupItems/ folder.  Log In Items are set in the Accounts preferences.  Open System Preferences, click on the Accounts icon, then click on the LogIn Items tab.  Locate the item in the list for the application you want to remove and click on the "-" button to delete it from the list.
    Some software uses startup daemons or agents, which are a newer feature of the OS.  Look for them in /Library/LaunchAgents/ and /Library/LaunchDaemons/ or in /Home/Library/LaunchAgents/.
    If an application installs any other files the best way to track them down is to do a Finder search using the application name or the developer name as the search term.  Unfortunately Spotlight will not look in certain folders by default.  You can modify Spotlight's behavior or use a third-party search utility, EasyFind, instead.
    Some applications install a receipt in the /Library/Receipts/ folder.  Usually with the same name as the program or the developer.  The item generally has a ".pkg" extension.  Be sure you also delete this item as some programs use it to determine if it's already installed.
    There are many utilities that can uninstall applications.  Here is a selection:
        1. AppZapper
        2. AppDelete
        3. Automaton
        4. Hazel
        5. AppCleaner
        6. CleanApp
        7. iTrash
        8. Amnesia
        9. Uninstaller
      10. Spring Cleaning
    For more information visit The XLab FAQs and read the FAQ on removing software.

  • JDOQL Correlated Subquery - Bad SQL

    Hi,
    When I execute a JDOQL correlated subquery, the generated SQL is either
    invalid or incorrect. Exactly what happens depends on the exact query, and
    on the target database type, but I believe it all stems from the same
    problem, which has to do with table aliasing.
    If you need further details to reproduce this, please let me know. I'll be
    glad to help in any way I can to get this situation remedied quickly, as I
    am depending on this functionality. I have a test application to
    demonstrate the problem.
    I'm using Kodo 3.3.3 and application identity.
    Paul Mogren
    CommerceHub

    For the record, this is in part due to a bug in Kodo's SQL92 joining.
    See http://bugzilla.solarmetric.com/show_bug.cgi?id=1156
    -Patrick
    Paul Mogren wrote:
    Certainly... Here's a simple example using Microsoft's JDBC Driver for SQL
    Server 2000, and kodo.jdbc.sql.SQLServerDictionary, which produces invalid
    SQL.
    The query:
    pm.newQuery(Container.class,
    "(select from Entry entry where entries.contains(entry) &&
    entry.containedId != 1).isEmpty()");
    The classes:
    class Contained {
        private int id; //pk
    }
    class Container {
        private int id; //pk
        private Set entries = new HashSet(); //<Entry>
    }
    class Entry {
        private int containerId; //pk
        private int containedId; //pk
        private Container container; //persistent-redundant
        private Contained contained; //persistent-redundant
    }
    The result:
    Incorrect syntax near the keyword 'WHERE'. {prepstmnt 31598780 SELECT
    t0.container_id, t0.lock FROM  WHERE (NOT EXISTS (SELECT DISTINCT
    t2.contained_id, t2.container_id FROM dbo.entry t2 WHERE (t1.contained_id
    = t2.contained_id AND t1.container_id = t2.container_id AND
    t2.contained_id <> ?) AND t0.container_id = t1.container_id))
    [params=(int) 1]} [code=156, state=HY000]
    Patrick Linskey wrote:
    Hi Paul,
    Kodo's correlated subquery support does have some known limitations. Can
    you post a sample JDOQL statement + corresponding SQL statement?
    -Patrick

  • Anybody can help with this SQL?

    The table is simple, only 2 columns:
    create table personpay(
    id integer primary key,
    pay number(8,2) not null);
    So the original table looks like this:
    ID PAY
    1 800
    2 400
    3 1200
    4 500
    5 600
    6 1900
    The requirement is to use one single query (no PL/SQL) to show something like the following; in other words, for each ID, the pay is the sum of all the rows before itself plus itself. So the query result looks like this:
    ID PAY
    1 800
    2 1200
    3 2400
    4 2900
    5 3500
    6 5400
    Again, just one SQL statement, no PL/SQL. Can anybody help with this? I really appreciate it.
    thanks,

    Eh, people are so "analytically minded" that they can't even notice a simple join?
    Counting Ordered Rows
    Let’s start with a basic counting problem. Suppose we are given a list of integers, for example:
    x
    2
    3
    4
    6
    9
    and want to enumerate all of them sequentially like this:
    x      #
    2      1
    3      2
    4      3
    6      4
    9      5
    Enumerating rows in the increasing order is the same as counting how many rows precede a given row.
    SQL enjoys success unparalleled by any rival query language. Not the least reason for such popularity is its proximity to English. Let's examine the informal idea
    Enumerating rows in increasing order is counting how many rows precede a given row.
    carefully. Perhaps the most important point is that we refer to the rows in the source table twice: first, to a given row; second, to a preceding row. Therefore, we need to join our number list with itself (fig 1.1).
    Cartesian Product
    Surprisingly, not many basic SQL tutorials, which are so abundant on the web today, mention Cartesian product. Cartesian product is a join operator with no join condition
    select A.*, B.* from A, B
    Figure 1.1: Cartesian product of the set A = {2,3,4,6,9} by itself. Counting all the elements x that are no greater than y produces the sequence number of y in the set A.
    Carrying this idea over into a formal SQL query is straightforward. As it is our first query in this book, let's do it step by step. The Cartesian product itself is
    select t.x x, tt.x y
    from T t, T tt
    Next, the triangle area below the main diagonal is
    select t.x x, tt.x y
    from T t, T tt
    where tt.x <= t.x
    Finally, we need only one column – t.x – which we group the previous result by and count
    select t.x, count(*) seqNum
    from T t, T tt
    where tt.x <= t.x
    group by t.x
    What if we modify the problem slightly and ask for a list of pairs where each number is coupled with its predecessor?
    x      predecessor
    2      
    3      2
    4      3
    6      4
    9      6
    Let me provide a typical mathematician's answer first -- it is remarkable in a certain way. Given that we already know how to number list elements successively, it might be tempting to reduce the current problem to the previous one:
    Enumerate all the numbers in increasing order and match each sequence number seq# with predecessor seq#-1. Next!
    This attitude is, undoubtedly, the most economical way of thinking, although it does not necessarily produce the most efficient SQL. Therefore, let's revisit our original approach, as illustrated in fig 1.2.
    Figure 1.2: Cartesian product of the set A = {2,3,4,6,9} by itself. The predecessor of y is the maximal number in a set of x that are less than y. There is no predecessor for y = 2.
    This translates into the following SQL query
    select t.x, max(tt.x) predecessor
    from T t, T tt
    where tt.x < t.x
    group by t.x
    Both solutions are expressed in standard SQL leveraging join and grouping with aggregation. Alternatively, instead of joining and grouping why don’t we calculate the count or max just in place as a correlated scalar subquery:
    select t.x,
    (select count(*) from T tt where tt.x <= t.x) seq#
    from T t
    The subquery always returns a single value; this is why it is called scalar. The tt.x <= t.x predicate connects it to the outer query; this is why it is called correlated. Arguably, leveraging correlated scalar subqueries is one of the most intuitive techniques for writing SQL queries.
    How about counting rows that are not necessarily distinct? This is where our method breaks. It is challenging to distinguish duplicate rows by purely logical means, so various less "pure" counting methods were devised. They all, however, require extending SQL syntactically, which was the beginning of the slide down the ever-increasing language-complexity slope.
    Here is how the analytic SQL extension handles these two problems
    select x, rank() over(order by x) seq# from T;               -- first problem
    select x, lag(x) over(order by x) predecessor from T;        -- second problem
    Many people suggest that it's not only more efficient, but more intuitive. The idea that "analytics rocks" can be challenged in many ways. The syntactic clarity has its cost: the SQL programmer has to remember (or at least look up) the list of analytic functions. The performance argument is not self-evident, since non-analytic queries are simpler constructions from the optimizer's perspective. A shorter list of physical execution operators implies fewer query transformation rules, and a less dramatic combinatorial explosion of the optimizer search space.
    It might even be argued that the syntax could be better. The partition by and order by clauses have functionality similar to the group by and order by clauses in the main query block, yet one name was reused while the other was given a new name. Unlike other scalar expressions, which can be placed anywhere in a SQL query where scalar values are accepted, the analytic clause lives in the scope of the select clause only. I have never been able to suppress the impression that the analytic extension could have been designed in a more natural way.
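    Coming back to the original running-total requirement, either style above answers it directly. A minimal sketch against the personpay table from the question:
    -- correlated scalar subquery: sum of every pay up to and including the current id
    select p.id,
           (select sum(pp.pay) from personpay pp where pp.id <= p.id) pay
    from personpay p
    order by p.id;
    -- analytic equivalent
    select id, sum(pay) over (order by id) pay
    from personpay
    order by id;
    Both return the cumulative figures shown in the expected output (800, 1200, 2400, 2900, 3500, 5400).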

  • A little help with some code and theory

    Hey guys, I'm kinda new to this whole programming thing so
    bear with me please. I have a little problem that I just can't seem
    to get my head around. I am working on an inventory management
    page. It keeps track of what software is installed where and how
    many copies we have, and serials and such. Well, this page was set
    up long before I started here and it wasn't too friendly. So I'm
    editing it and adding some new features. Well one of them is
    displaying the “available installs count” with the
    search results.
    Here is my problem. The way the system keeps track of what
    software is installed where is by taking the appId from the
    table with all the app's information in it (the sn, name, max
    installs, etc.) and creating a new entry in another table that
    indexes that appId and the workstation it is installed on (along
    with an install id). The only way to check how many copies are
    installed is to get the record count for a particular appId.
    This is where I run into trouble. When a user searches for
    apps, the results page displays the apps that still have installs
    available, i.e. where the record count of getInstallCount for
    whatever appId is less than the max installs. What is the best
    way to do this, since to get the record count of getInstallCount I
    need an appID, and the appID isn't found until I do the search
    query? What I'm using now is this, but it always returns 0 for
    current Install Count. See the attached snippet of code:
    I see what it is doing. It is getting the installCount for
    all appIds, but I don't know how to narrow it down without doing
    the search first, and I don't know how to do the search without
    getting the install count first. Any ideas or advice would rock,
    guys. Like I said I'm kinda new to this programming stuff but it is
    awesome! Thanks!
    Kevin

    The w and wi are just table aliases that make it easier to
    reference those tables by column when joining multiple tables
    (it avoids having to prefix column names with the entire table name,
    etc.). When doing a self-join (joining a table to itself), the use
    of table aliases is
    required, otherwise you would have no other way to tell
    which column belonged to which "instance" of the table.
    The query itself is pretty simple. It is just using a
    correlated subquery to select only those instances of
    workstationApps where the count of appID instances in
    workstationAppIndex is less than the amount specified in the
    maxConcurrentInstalls for the same appID.
    Also, I do see what you are doing with #form.searchType#, I
    was just making sure that it was what you really intended to do.
    Phil
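    Since the attached snippet did not make it into this thread, here is a rough sketch of the kind of query Phil is describing. The table and column names (workstationApps w, workstationAppIndex wi, appID, maxConcurrentInstalls) are taken from the discussion above and may not match the real schema exactly:
    SELECT w.appID,
           w.maxConcurrentInstalls,
           (SELECT COUNT(*)
            FROM   workstationAppIndex wi
            WHERE  wi.appID = w.appID) AS currentInstallCount
    FROM   workstationApps w
    WHERE  (SELECT COUNT(*)
            FROM   workstationAppIndex wi
            WHERE  wi.appID = w.appID) < w.maxConcurrentInstalls
    The correlated subquery counts the installs recorded for each appID, so the search and the install count happen in a single statement.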

  • Top n Analysis using correlated subquery

    Please explain this query. It is doing top-n analysis using a correlated subquery. I need an explanation of how this query executes.
    Select distinct a.sal
    From emp a
    where 1=(select count ( distinct b.sal) from emp b
         where a.sal <=b.sal)
    Thanks in advance

    Try breaking the query down and rewriting it in order to follow the logic;
    SQL> --
    SQL> -- Start by getting each salary from emp along with a count of all distinct salaries in emp
    SQL> --
    SQL> select   a.sal,
            (select count (distinct b.sal) from scott.emp b ) count_sal
    from scott.emp a
    order by 1 desc
           SAL  COUNT_SAL
          5000         12
          3000         12
          3000         12
          2975         12
          2850         12
          2450         12
          1600         12
          1500         12
          1300         12
          1250         12
          1250         12
          1100         12
           950         12
           800         12
    14 rows selected.
    SQL> --
    SQL> -- Add a condition to count only the distinct salaries greater than or equal to the current salary
    SQL> --
    SQL> select   a.sal,
            (select count (distinct b.sal) from scott.emp b where a.sal <=b.sal) rank_sal
    from scott.emp a
    order by 1 desc
           SAL   RANK_SAL
          5000          1
          3000          2
          3000          2
          2975          3
          2850          4
          2450          5
          1600          6
          1500          7
          1300          8
          1250          9
          1250          9
          1100         10
           950         11
           800         12
    14 rows selected.
    SQL> --
    SQL> -- Add a condition to only pick the nth highest salary
    SQL> --
    SQL> select    a.sal,
             (select count (distinct b.sal) from scott.emp b where a.sal <=b.sal) rank_sal
    from scott.emp a
    where (select count (distinct b.sal) from scott.emp b where a.sal <=b.sal) = 4
           SAL   RANK_SAL
          2850          4
    1 row selected.

    Hope this helps.
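    For comparison, the same nth-highest-salary logic is often written with an analytic function instead of the correlated count; a sketch against the same SCOTT.EMP data:
    select sal
    from  (select sal,
                  dense_rank() over (order by sal desc) rank_sal
           from   scott.emp)
    where  rank_sal = 4;
    Against the data above this returns the same single 2850 row; DENSE_RANK numbers the distinct salaries the same way the correlated COUNT(DISTINCT b.sal) does.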

  • SQL Bug in "Minus" in correlated subquery presence of index (11.2.0.1.0)

    SQL Bug in "Minus" in correlated subquery presence of index
    (Oracle Database 11g Release 11.2.0.1.0)
    Below, there is a small example that shows the bug. Further below,
    there are some more comments.
    drop table Country;
    create table Country
    (code VARCHAR2(4) constraint countrykey PRIMARY KEY,
    name VARCHAR2(35));
    -- if the key constraint is not given, the bug does not occur
    drop table City;
    create table City
    (name VARCHAR2(35),
    country VARCHAR2(4),
    population number);
    drop table Locatedon;
    create table Locatedon
    (city VARCHAR2(35),
    country VARCHAR2(4),
    island VARCHAR2(35));
    insert into country values('E','Spain');
    insert into country values('F','France');
    insert into country values('S','Sweden');
    insert into country values('GB','Sweden');
    insert into city values('Ajaccio','F',53500);
    insert into city values('Paris','F',2152423);
    insert into city values('Palma','E',322008);
    insert into city values('Madrid','E',3041101);
    insert into city values('Stockholm','S',711119);
    insert into city values('London','GB',6967500);
    insert into locatedon values('Ajaccio','F','Corse');
    insert into locatedon values('Palma','E','Mallorca');
    insert into locatedon values('London','GB','Great Britain');
    -- all countries that have a city that is not located on
    -- some island: should be E, F, S.
    Select c.name
    From country c
    Where exists
    ((Select name
    From city
    Where city.country=c.code)
    minus
    (Select city
    From locatedon
    Where locatedon.country=c.code));
    -- wrong answer: only Sweden; Spain and France not in the answer!
    select distinct country from
    ((Select name, country
    From city)
    minus
    (Select city, country
    From locatedon));
    -- correct answer: E, F, S
    Comments:
    The bug has been found by students in our SQL course.
    Using a larger database from that course, the bug can be reproduced
    (same queries as above) at
    http://www.semwebtech.org/sqlfrontend/
    (wrong: 142 answers, correct: 154 answers)
    During reduction to a simple example, there were some interesting
    observations: trying with smaller and simpler tables (without the keys)
    and synthetic data, the bug did not occur immediately. When
    restating the query after about one day, the bug occurred. Obviously,
    Oracle creates some index on its own in the course of its internal
    optimization, and that index (or more exactly, its usage) exhibits the bug. The
    query plan (shown in SQL Developer) was the same before and after.
    Wolfgang

    There's a typo in the test data - GB should presumably not be in Sweden. However....
    "the bug did not occur immediately" - It's possible. But what would have almost certainly happened is that the execution plan DID change at some point. There are various reasons why it might not be immediate.
    "Obviously, Oracle creates some index on its own in course of its internal optimization" - Far from obvious, what are you on about?
    "The query plan was the same before and after" - Bet you it wasn't.
    A clear illustration of the issue and indication that it must be a bug is below.
    Simply by hinting a different access method, we can change the result. Therefore, bug.
    See Oracle Support (http://support.oracle.com) and search for "wrong results".
    Please raise with Oracle Support to get confirmation of bug.
    There have been so many wrong results bugs recently, it's getting ridiculous.
    It's a real issue, IMHO.
    If you can't trust the DB to get your data right....
    Note that the query plan is very much NOT the same, and it is the difference in query plan that is the root cause of the bug.
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production
    SQL> SELECT c.name
      2  FROM   country1 c
      3  WHERE  exists ((SELECT name
      4                  FROM   city1
      5                  WHERE  city1.country=c.code)
      6                  MINUS
      7                 (SELECT city
      8                  FROM   locatedon1
      9                  WHERE  locatedon1.country=c.code));
    NAME
    Sweden
    SQL> SELECT /*+ full(c) */
      2         c.name
      3  FROM   country1 c
      4  WHERE  exists ((SELECT name
      5                  FROM   city1
      6                  WHERE  city1.country=c.code)
      7                  MINUS
      8                 (SELECT city
      9                  FROM   locatedon1
    10                  WHERE  locatedon1.country=c.code));
    NAME
    Spain
    France
    Sweden
    SQL> explain plan for
      2  SELECT c.name
      3  FROM   country1 c
      4  WHERE  exists ((SELECT name
      5                  FROM   city1
      6                  WHERE  city1.country=c.code)
      7                  MINUS
      8                 (SELECT city
      9                  FROM   locatedon1
    10                  WHERE  locatedon1.country=c.code));
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 156929629
    | Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT             |            |     1 |    27 |    12  (25)| 00:00:01 |
    |   1 |  NESTED LOOPS                |            |       |       |            |          |
    |   2 |   NESTED LOOPS               |            |     1 |    27 |    12  (25)| 00:00:01 |
    |   3 |    VIEW                      | VW_SQ_1    |     6 |    24 |    10  (20)| 00:00:01 |
    |   4 |     MINUS                    |            |       |       |            |          |
    |   5 |      SORT UNIQUE             |            |     6 |   138 |            |          |
    |   6 |       TABLE ACCESS FULL      | CITY1      |     6 |   138 |     4   (0)| 00:00:01 |
    |   7 |      SORT UNIQUE             |            |     3 |    69 |            |          |
    |   8 |       TABLE ACCESS FULL      | LOCATEDON1 |     3 |    69 |     4   (0)| 00:00:01 |
    |*  9 |    INDEX UNIQUE SCAN         | COUNTRYKEY |     1 |       |     0   (0)| 00:00:01 |
    |  10 |   TABLE ACCESS BY INDEX ROWID| COUNTRY1   |     1 |    23 |     1   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       9 - access("VW_COL_1"="C"."CODE")
    Note
       - dynamic sampling used for this statement (level=4)
    26 rows selected.
    SQL> explain plan for
      2  SELECT /*+ full(c) */
      3         c.name
      4  FROM   country1 c
      5  WHERE  exists ((SELECT name
      6                  FROM   city1
      7                  WHERE  city1.country=c.code)
      8                  MINUS
      9                 (SELECT city
    10                  FROM   locatedon1
    11                  WHERE  locatedon1.country=c.code));
    Explained.
    SQL> select * from table(dbms_xplan.display);
    PLAN_TABLE_OUTPUT
    Plan hash value: 1378726376
    | Id  | Operation            | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT     |            |     1 |    23 |    14  (15)| 00:00:01 |
    |*  1 |  FILTER              |            |       |       |            |          |
    |   2 |   TABLE ACCESS FULL  | COUNTRY1   |     4 |    92 |     4   (0)| 00:00:01 |
    |   3 |   MINUS              |            |       |       |            |          |
    |   4 |    SORT UNIQUE       |            |     1 |    23 |     5  (20)| 00:00:01 |
    |*  5 |     TABLE ACCESS FULL| CITY1      |     1 |    23 |     4   (0)| 00:00:01 |
    |   6 |    SORT UNIQUE       |            |     1 |    23 |     5  (20)| 00:00:01 |
    |*  7 |     TABLE ACCESS FULL| LOCATEDON1 |     1 |    23 |     4   (0)| 00:00:01 |
    Predicate Information (identified by operation id):
       1 - filter( EXISTS ( (SELECT "NAME" FROM "CITY1" "CITY1" WHERE
                  "CITY1"."COUNTRY"=:B1)MINUS (SELECT "CITY" FROM "LOCATEDON1" "LOCATEDON1"
                  WHERE "LOCATEDON1"."COUNTRY"=:B2)))
       5 - filter("CITY1"."COUNTRY"=:B1)
       7 - filter("LOCATEDON1"."COUNTRY"=:B1)
    Note
       - dynamic sampling used for this statement (level=4)
    27 rows selected.

    Just to show that it's related to query transformation:
    SQL> SELECT /*+ 
      2             no_query_transformation
      3         */
      4         c.name
      5  FROM   country1 c
      6  WHERE  exists ((SELECT name
      7                  FROM   city1
      8                  WHERE  city1.country=c.code)
      9                  MINUS
    10                 (SELECT city
    11                  FROM   locatedon1
    12                  WHERE  locatedon1.country=c.code));
    NAME
    Spain
    France
    Sweden
    SQL>
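    Until the bug is confirmed and patched, one possible workaround (a sketch that is logically equivalent to the EXISTS/MINUS form for this sample data, where city names are never null) is to fold the MINUS into a NOT EXISTS:
    SELECT c.name
    FROM   country c
    WHERE  EXISTS (SELECT 1
                   FROM   city ci
                   WHERE  ci.country = c.code
                   AND    NOT EXISTS (SELECT 1
                                      FROM   locatedon l
                                      WHERE  l.country = c.code
                                      AND    l.city    = ci.name));
    Against the sample data posted above this returns Spain, France and Sweden.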

  • Performance of using a Select For Update vs a correlated subquery

    I was wondering whether or not it is more efficient to use the
    Select ... For Update (with a cursor etc.) versus a correlated
    subquery.
    I can accomplish the same thing with either; however, performance
    at our site is an issue.

    Use a select for update cursor, as it is faster because it updates
    based on the rowid. One thing to keep in mind is that the rows to
    be updated get locked, so that nobody else can update them till
    the lock is released. I have had very good performance results
    with these cursors.
    Good luck !
    Sudha
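    A minimal sketch of the pattern Sudha describes, using an illustrative EMP-style table (the table, columns and the 10% raise are only placeholders); the WHERE CURRENT OF clause is what gives you the rowid-based update:
    DECLARE
      CURSOR c_emp IS
        SELECT empno, sal
        FROM   emp
        WHERE  deptno = 20
        FOR UPDATE OF sal;          -- locks the selected rows until commit/rollback
    BEGIN
      FOR r IN c_emp LOOP
        UPDATE emp
        SET    sal = r.sal * 1.10
        WHERE  CURRENT OF c_emp;    -- updates the row just fetched, by its rowid
      END LOOP;
      COMMIT;                       -- releases the locks
    END;
    /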

  • Help with count and sum query

    Hi, I am using Oracle 10g. I am trying to aggregate duplicate count records. Here is what I have so far:
    SELECT 'ST' LEDGER,
    CASE
    WHEN c.Category = 'E' THEN 'Headcount Exempt'
    ELSE 'Headcount Non-Exempt'
    END
    ACCOUNTS,
    CASE WHEN a.COMPANY = 'ZEE' THEN 'OH' ELSE 'NA' END MARKET,
    'MARCH_12' AS PERIOD,
    COUNT (a.empl_id) head_count
    FROM essbase.employee_pubinfo a
    LEFT OUTER JOIN MMS_DIST_COPY b
    ON a.cost_ctr = TRIM (b.bu)
    INNER JOIN MMS_GL_PAY_GROUPS c
    ON a.pay_group = c.group_code
    WHERE a.employee_status IN ('A', 'L', 'P', 'S')
    AND FISCAL_YEAR = '2012'
    AND FISCAL_MONTH = 'MARCH'
    GROUP BY a.company,
    b.district,
    a.cost_ctr,
    c.category,
    a.fiscal_month,
    a.fiscal_year;
    which gives me the same rows with different head_counts. I am trying to combine the same rows into a total (one record). Do I use a subquery?

    Hi,
    Whenever you have a problem, please post a little sample data (CREATE TABLE and INSERT statements, relevant columns only) from all tables involved.
    Also post the results you want from that data, and an explanation of how you get those results from that data, with specific examples.
    user610131 wrote:
    ... which gives me same rows with different head_counts.
    If they have different head_counts, then the rows are not the same.
    I am trying to combine the same rows as a total (one record). Do I use a subquery?
    Maybe. It's more likely that you need a different GROUP BY clause, since the GROUP BY clause determines how many rows of output there will be. I'll be able to say more after you post the sample data, results, and explanation.
    You may want both a sub-query and a different GROUP BY clause. For example:
    WITH    got_group_by_columns     AS
    (
         SELECT  a.empl_id
         ,     CASE
                        WHEN  c.category = 'E'
                  THEN  'Headcount Exempt'
                        ELSE  'Headcount Non-Exempt'
                END          AS accounts
         ,       CASE
                        WHEN a.company = 'ZEE'
                        THEN 'OH'
                        ELSE 'NA'
                END          AS market
         FROM              essbase.employee_pubinfo a
         LEFT OUTER JOIN  mms_dist_copy             b  ON   a.cost_ctr     = TRIM (b.bu)
         INNER JOIN       mms_gl_pay_groups        c  ON   a.pay_group      = c.group_code
         WHERE     a.employee_status     IN ('A', 'L', 'P', 'S')
         AND        fiscal_year           = '2012'
         AND        fiscal_month          = 'MARCH'
    SELECT    'ST'               AS ledger
    ,       accounts
    ,       market
    ,       'MARCH_12'          AS period
    ,       COUNT (empl_id)       AS head_count
    FROM       got_group_by_columns
    GROUP BY  accounts
    ,            market
    ;But that's just a wild guess.
    You said you wanted "Help with count and sum". I see the COUNT, but what do you want with SUM? No doubt this will be clearer after you post the sample data and results.
    Edited by: Frank Kulash on Apr 4, 2012 5:31 PM
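     To make the point about the GROUP BY clause concrete, here is a minimal, hypothetical sketch (the table t and its columns are invented for illustration, not taken from the posted query):
     -- Grouping by an extra column (cost_ctr) splits what looks like "the same row"
     -- into several rows, each with its own count:
     SELECT   market, accounts, COUNT (empl_id) AS head_count
     FROM     t
     GROUP BY market, accounts, cost_ctr;
     -- Grouping only by the columns you actually display collapses them into one row
     -- per (market, accounts) combination:
     SELECT   market, accounts, COUNT (empl_id) AS head_count
     FROM     t
     GROUP BY market, accounts;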

  • Help with if statement in cursor and for loop to get output

     I have the following cursor and want to use an IF/ELSE statement to get the output. The cursor is working fine. What I need help with is how to use an IF/ELSE statement to get only the folderrsn values that have not been updated in the last 30 days. If you look at the table below, my SELECT statement shows that folderrsn 291631 was updated only 4 days ago and folderrsn 322160 was also updated 4 days ago.
     I do not want these two to appear in my result set. So I need to use IF/ELSE so that my result only shows the folderrsn values that have not been updated in the last 30 days.
    Here is my cursor:
     /* Cursor for Email procedure. It is working: it shows the userid and the string
        "You need to update these folders". */
     DECLARE
        a_user          varchar2(200) := null;
        v_assigneduser  varchar2(20);
        v_folderrsn     varchar2(200);
        v_emailaddress  varchar2(60);
        v_subject       varchar2(200);
        CURSOR c IS
           SELECT assigneduser, vu.emailaddress, f.folderrsn, trunc(f.indate) AS "IN DATE",
                  MAX (trunc(fpa.attemptdate)) AS "LAST UPDATE",
                  trunc(sysdate) - MAX (trunc(fpa.attemptdate)) AS "DAYS PAST"
                  --MAX (TRUNC (fpa.attemptdate)) - TRUNC (f.indate) AS "NUMBER OF DAYS"
           FROM   folder f, folderprocess fp, validuser vu, folderprocessattempt fpa
           WHERE  f.foldertype = 'HJ'
           AND    f.statuscode NOT IN (20, 40)
           AND    f.folderrsn = fp.folderrsn
           AND    fp.processrsn = fpa.processrsn
           AND    vu.userid = fp.assigneduser
           AND    vu.statuscode = 1
           GROUP BY assigneduser, vu.emailaddress, f.folderrsn, f.indate
           ORDER BY fp.assigneduser;
     BEGIN
        FOR c1 IN c LOOP
           IF (c1.assigneduser = v_assigneduser) THEN
              dbms_output.put_line(' ' || c1.folderrsn);
           ELSE
              dbms_output.put(c1.assigneduser || ': ' || 'Overdue Folders:You need to update these folders: Folderrsn: ' || c1.folderrsn);
           END IF;
           a_user         := c1.assigneduser;
           v_assigneduser := c1.assigneduser;
           v_folderrsn    := c1.folderrsn;
           v_emailaddress := c1.emailaddress;
           v_subject      := 'Subject: Project for';
        END LOOP;
     END;
     The reason I have included the following table is that I want you to see the output from the SELECT statement. That way you can help me write the IF statement in the above cursor so that the result will look like this:
    emailaddress
    Subject: 'Project for ' || V_email || 'not updated in the last 30 days'
    v_folderrsn
    v_folderrsn
    etc
    [email protected]......
    Subject: 'Project for: ' Jim...'not updated in the last 30 days'
    284087
    292709
    [email protected].....
    Subject: 'Project for: ' Kim...'not updated in the last 30 days'
    185083
    190121
    190132
    190133
    190159
    190237
    284109
    286647
    294631
    322922
    [email protected]....
    Subject: 'Project for: Joe...'not updated in the last 30 days'
    183332
    183336
    [email protected]......
    Subject: 'Project for: Sam...'not updated in the last 30 days'
    183876
    183877
    183879
    183880
    183881
    183882
    183883
    183884
    183886
    183887
    183888
     This table shows the SELECT statement output. I want to eliminate the two rows whose last update is less than 30 days ago (see the last column).
    Assigneduser....Email.........Folderrsn...........indate.............maxattemptdate...days past since last update
    JIM.........      jim@ aol.com.... 284087.............     9/28/2006.......10/5/2006...........690
    JIM.........      jim@ aol.com.... 292709.............     3/20/2007.......3/28/2007............516
    KIM.........      kim@ aol.com.... 185083.............     8/31/2004.......2/9/2006.............     928
    KIM...........kim@ aol.com.... 190121.............     2/9/2006.........2/9/2006.............928
    KIM...........kim@ aol.com.... 190132.............     2/9/2006.........2/9/2006.............928
    KIM...........kim@ aol.com.... 190133.............     2/9/2006.........2/9/2006.............928
    KIM...........kim@ aol.com.... 190159.............     2/13/2006.......2/14/2006............923
    KIM...........kim@ aol.com.... 190237.............     2/23/2006.......2/23/2006............914
    KIM...........kim@ aol.com.... 284109.............     9/28/2006.......9/28/2006............697
    KIM...........kim@ aol.com.... 286647.............     11/7/2006.......12/5/2006............629
    KIM...........kim@ aol.com.... 294631.............     4/2/2007.........3/4/2008.............174
    KIM...........kim@ aol.com.... 322922.............     7/29/2008.......7/29/2008............27
    JOE...........joe@ aol.com.... 183332.............     1/28/2004.......4/23/2004............1585
    JOE...........joe@ aol.com.... 183336.............     1/28/2004.......3/9/2004.............1630
    SAM...........sam@ aol.com....183876.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183877.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183879.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183880.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183881.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183882.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183883.............3/5/2004.........3/8/2004.............1631
    SAM...........sam@ aol.com....183884.............3/5/2004.........3/8/2004............     1631
    SAM...........sam@ aol.com....183886.............3/5/2004.........3/8/2004............     1631
    SAM...........sam@ aol.com....183887.............3/5/2004.........3/8/2004............     1631
    SAM...........sam@ aol.com....183888.............3/5/2004.........3/8/2004............     1631
    PAT...........pat@ aol.com.....291630.............2/23/2007.......7/8/2008............     48
    PAT...........pat@ aol.com.....313990.............2/27/2008.......7/28/2008............28
    NED...........ned@ aol.com.....190681.............4/4/2006........8/10/2006............746
    NED...........ned@ aol.com......95467.............6/14/2006.......11/6/2006............658
    NED...........ned@ aol.com......286688.............11/8/2006.......10/3/2007............327
    NED...........ned@ aol.com.....291631.............2/23/2007.......8/21/2008............4
    NED...........ned@ aol.com.....292111.............3/7/2007.........2/26/2008............181
    NED...........ned@ aol.com.....292410.............3/15/2007.......7/22/2008............34
    NED...........ned@ aol.com.....299410.............6/27/2007.......2/27/2008............180
    NED...........ned@ aol.com.....303790.............9/19/2007.......9/19/2007............341
    NED...........ned@ aol.com.....304268.............9/24/2007.......3/3/2008............     175
    NED...........ned@ aol.com.....308228.............12/6/2007.......12/6/2007............263
    NED...........ned@ aol.com.....316689.............3/19/2008.......3/19/2008............159
    NED...........ned@ aol.com.....316789.............3/20/2008.......3/20/2008............158
    NED...........ned@ aol.com.....317528.............3/25/2008.......3/25/2008............153
    NED...........ned@ aol.com.....321476.............6/4/2008.........6/17/2008............69
    NED...........ned@ aol.com.....322160.............7/3/2008.........8/21/2008............4
    MOE...........moe@ aol.com.....184169.............4/5/2004.......12/5/2006............629
    [email protected]/27/2004.......3/8/2004............1631
     How do I incorporate an IF/ELSE statement in the above cursor so that rows updated less than 30 days ago are not returned? I do not want to send email if the project has been updated within the last 30 days.
    Edited by: user4653174 on Aug 25, 2008 2:40 PM

    analytical functions: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96540/functions2a.htm#81409
    CASE
    http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96624/02_funds.htm#36899
    http://download.oracle.com/docs/cd/B10501_01/appdev.920/a96624/04_struc.htm#5997
    Incorporating either of these into your query should assist you in returning the desired results.
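     One way to apply that advice to the cursor above, as an untested sketch that keeps the original joins and filters: move the 30-day test into the query itself with a HAVING clause, so the loop never sees recently updated folders and no extra IF/ELSE filtering is needed:
     CURSOR c IS
        SELECT fp.assigneduser, vu.emailaddress, f.folderrsn, trunc(f.indate) AS "IN DATE",
               MAX (trunc(fpa.attemptdate)) AS "LAST UPDATE",
               trunc(sysdate) - MAX (trunc(fpa.attemptdate)) AS "DAYS PAST"
        FROM   folder f, folderprocess fp, validuser vu, folderprocessattempt fpa
        WHERE  f.foldertype = 'HJ'
        AND    f.statuscode NOT IN (20, 40)
        AND    f.folderrsn = fp.folderrsn
        AND    fp.processrsn = fpa.processrsn
        AND    vu.userid = fp.assigneduser
        AND    vu.statuscode = 1
        GROUP BY fp.assigneduser, vu.emailaddress, f.folderrsn, f.indate
        HAVING trunc(sysdate) - MAX (trunc(fpa.attemptdate)) > 30
        ORDER BY fp.assigneduser;
     The existing IF (c1.assigneduser = v_assigneduser) test is then only needed to detect when the loop moves on to the next user's block of folders.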

  • WRT310N: Help with DMZ/settings (firmware 1.0.09) for wired connection

    Hello. I have a WRT310N and have been having a somewhat difficult time with my xbox 360's connection. I have forwarded all the necessary ports (53, 80, 88, 3074) for it to run, and tried changing MTU and what-not.
    I don't know if I have DMZ setup incorrectly, or if it's my settings.
    Setup as follows:
    PCX2200 modem connected via ethernet to WRT310N. 
     The WRT310N has a WAP54G plugged into ethernet port 1, and then upstairs (so that my Mother's computer can get a strong signal) I have another WAP54G that I believe receives its signal from the downstairs WAP54G.
    In the back of the WRT310N, I have my computer connected via ethernet port 3, and my Xbox 360 connected via ethernet port 4.
     Now, I first figured I just have so many connections tied to the router and that is the reason it is so slow. However, when I unplug all the other ethernet cords and nothing is connected wirelessly, except for my Xbox connected to ethernet port 4, it is still poor. Also, with everything connected (the WAP54Gs and other devices wirelessly) I get on my PC and run a speed test. For reference, my speed tests on my PC average (after 5 tests) 8.5 Mbps download and 1.00 Mbps upload, with a ping of 82 ms.
    Here is an image of the results:
     http://www.speedtest.net/result/721106714.png
    Let me add a little more detail of my (192.168.1.1) settings for WRT310N.
    For starters, my Father's IT guy at his workplace set up this WRT310N and WAP54G's. So some of these settings may be his doing. I just don't know which.
    "Setup" as Auto-configurations DHCP. I've added my Xbox's IP address to the DHCP reservation the IP of 192.168.1.104. This has (from what I've noticed) stayed the same for days.
    MTU: Auto, which stays at 1500 when I check under status.
    Advanced Routing: NAT routing enabled, Dynamic Routing disabled. 
    Security: Disabled SPI firewall, UNchecked these: Filter Anonymous Internet Requests, Multicast, and Internet NAT redirection.
    VPN passthrough: All 3 options are enabled (IPSec, PPTP, L2TP)
    Access Restrictions: None.
    Applications and Gaming: Single port forwarding has no entries. Port Range Forwarding I have the ports 53 UDP/TCP, 88 UDP, 3074 UDP/TCP, and 80 TCP forwarded to IP 192.168.1.104 enabled. (192.168.1.104 is the IP for my xbox connected via ethernet wired that is in DHCP reserved list)
    Port Range Triggering: It does not allow me to change anything in this page.
    DMZ: I have it Enabled. This is where I am a bit confused. It says "Source IP Address" and it has me select either "Any IP address" or to put entries to the XXX.XXX.XXX.XXX to XXX fields. I have selected use any IP address. Then the source IP area, it says "Destination:"  I can do either "IP address: 192.168.1.XXX" or "MAC address:" Also, under MAC Address, it says DHCP Client Table and I went there and saw my Xbox under the DHCP client list (It shows up only when the Xbox is on) and selected it.  
    Under QoS: WMM Enabled, No acknowledgement disabled.
     Internet Access Priority: Enabled. Upstream Bandwidth: I set it to Manual and put 6000 Kbps. I had it set on Auto before, but I changed it. I have no idea what to put there so I just put a higher number. 
    Then I added for Internet Access Priority a Medium Priority for Ethernet Port 4 (the port my xbox is plugged into).
    Administration: Management: Web utility access: I have checked HTTP, unchecked HTTPS.
    Web utility access via Wireless: Enabled. Remote Access: Disabled.
    UPnp: Enabled.
    Allow Users to Configure: Enabled.
    Allow users to Disable Internet Access: Enabled.
    Under Diagnostics, when I try and Ping test 192.168.1.104 (xbox when on and connected to LIVE), I get:
    PING 192.168.1.104 (192.168.1.104): 24 data bytes
    Request timed out.
    Request timed out.
    Request timed out.
    Request timed out.
    Request timed out.
    --- 192.168.1.104 data statistics ---
    5 Packets transmitted, 0 Packets received, 100% Packet loss
    Also, when I do Traceroute Test for my Xbox's IP, I just keep getting: 
    traceroute to 192.168.1.104 (192.168.1.104), 30 hops max, 40 byte packets
    1 * * * 192.168.1.1 Request timed out.
    2 * * * 192.168.1.1 Request timed out.
     As for the Wireless Settings, it is all on the default settings with Wi-Fi Protected setup Enabled.
    To add, I have tried connecting my modem directly to the Xbox and my connection is much improved. I have no difficulty getting the NAT open, for it seems my settings are working for that. Any help with these settings would be VERY much appreciated. 
    Message Edited by CroftBond on 02-18-2010 01:09 PM

    I own 2 of these routers (one is a spare) with the latest firmware and I have been having trouble with them for over a year.  In my case the connection speed goes to a crawl and the only way to get it back is to disable the SPI firewall.  Rebooting helps for a few minutes, but the problem returns.  All of the other fixes recommended on these forums did not help.  I found out the hard way that disabling the SPI Firewall also closes all open ports ignoring your port forwarding settings.  If you have SPI Firewall disabled, you will never be able to ping your IP from an external address.  Turn your SPI Firewall back on and test your Ping. 
    John
