Query SQL Logs to get time

Hello folks, I am connected to a SQL database using Microsoft SQL Server Management Studio. I am querying the ISA logs for traffic by running the following query:
select distinct clientusername
from webproxylog
where logtime > '2014-03-12'
This query returns the list of servers accessing the ISA boxes, and I can play with the dates. What I want is for the results to also show the time. Is that possible? Is there something like where logtime > '2014-03-12' AND a time condition? I am not sure.
I am new to SQL, so any help will be highly appreciated.

Are you looking for the below?
select clientusername, max(logtime)   -- fetches the latest log time per clientusername
from webproxylog
where logtime > '2014-03-12'
group by clientusername
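If the goal is simply to see the time of each request as well as the date, note that logtime is presumably a datetime column, so you can return it directly and filter on a full timestamp. A minimal sketch, assuming logtime is a datetime (the cut-off time below is only an example):
select clientusername, logtime
from webproxylog
where logtime > '2014-03-12 08:30:00'   -- date plus time of day
order by logtime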

Similar Messages

  • Query SQL Log from BI Publisher

    Hi, where can I see the SQL queries that a BI Publisher report runs against the database?
    Here I can only see errors and times: ..domains\bifoundation_domain\servers\bi_server1\logs\bipublisher
    Thank you very much, bye. Riccardo

    Ok, maybe I've found this http://download.oracle.com/docs/cd/E14571_01/bi.1111/e13880/T526682T542464.htm#scg_vw_lf

  • SQL 2012 SP1 - How to determine a query that causes Error 8623 in SQL Log: The query processor ran out of internal resources and could not produce a query plan. This is a rare event...

    We are getting multiple 8623 errors in the SQL log while running a vendor's software.
    How can we find out which query causes the error?
    I tried to catch it using a SQL Profiler trace, but it doesn't show which query/SP is causing the error.
    I also tried to use an Extended Events session to catch it, but it doesn't create any output either.
    Error:
    The query processor ran out of internal resources and could not produce a query plan. This is a rare event and only expected for extremely complex queries or queries that
    reference a very large number of tables or partitions. Please simplify the query. If you believe you have received this message in error, contact Customer Support Services for more information.
    Extended Event Session that I used;
    CREATE EVENT SESSION
        overly_complex_queries
    ON SERVER
    ADD EVENT sqlserver.error_reported
        ACTION (sqlserver.sql_text, sqlserver.tsql_stack, sqlserver.database_id, sqlserver.username)
        WHERE ([severity] = 16
    AND [error_number] = 8623)
    ADD TARGET package0.asynchronous_file_target
    (SET filename = 'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries.xel' ,
        metadatafile = 'E:\SQLServer2012\MSSQL11.MSSQLSERVER\MSSQL\Log\XE\overly_complex_queries.xem',
        max_file_size = 10,
        max_rollover_files = 5)
    WITH (MAX_DISPATCH_LATENCY = 5 SECONDS)
    GO
    -- Start the session
    ALTER EVENT SESSION overly_complex_queries
        ON SERVER STATE = START
    GO
    It creates only the .xel file, but not the .xem file.
    Any help/advice is greatly appreciated.

    Hi VK_DBA,
    Regarding which query statement may fail with error 8623: as noted in the other post, you can use trace flags 4102 and 4118 to work around this error. Another approach is to look for queries with very long IN lists, a large number of
    UNIONs, or a large number of nested sub-queries; these are the most common causes of this particular error message.
    Error 8623 typically occurs when attempting to select records through a query with a very large number of entries in the IN clause (more than 10,000). To avoid this error, I suggest applying the latest Cumulative Update for SQL Server 2012 Service
    Pack 1 and then simplifying the query. You may try a divide-and-conquer approach: get part of the query working (as a temp table) and then add the extra joins and conditions. Or you could try running the query with the hint option (force order), option (hash join) or option
    (merge join), possibly with a plan guide.
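    For illustration only (this sketch is not from the original reply), a hinted query might look like the following; the table and column names are invented:
    SELECT o.OrderID, c.CustomerName
    FROM dbo.Orders AS o
    INNER JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID
    OPTION (FORCE ORDER, HASH JOIN);   -- forces the written join order and a hash join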
    For more information about error 8623, you can review the following article.
    http://blogs.technet.com/b/mdegre/archive/2012/03/13/8623-the-query-processor-ran-out-of-internal-resources-and-could-not-produce-a-query-plan.aspx
    Regards,
    Sofiya Li
    TechNet Community Support

  • Query SQL code gets deleted after export to Excel. "Query must have at least one destination field"

    Hi all,
    I'm getting really frustrated by this Access error. It happens when I export the result of a query to Excel through an Access macro: the first time it runs well, but the next time there is a chance that the query won't run and the error "Query
    must have at least one destination field" will be displayed. After that, when I check the query's SQL code, I discover the code has vanished. I'm using simple Select queries without joins, only "where", "group by" and "order by"
    clauses.
    Thank you in advance for your help,
    Jesus 
    Edit:
    One of these queries is like the following (all of them are of this type):
    SELECT Field1, field2, field3, field4, field5, Sum(Field6) AS SumOfField6, Sum(Field7) AS SumOfField7
    FROM Table1
    WHERE Field6 is not null
    GROUP BY Field1, field2, field3, field4, field5
    Order By Sum(Field6) desc

    Hi Peter, 
    Thank you for your response, I updated the original question with one of the codes.
    Thanks,
    Jesus

  • Getting Time-Out at Select Query

    Dear All,
    I am getting a time-out at a particular Select query.
    I have been getting this error for the last 3 days; before that it was working fine, and no change has been made to the program in the last 3 days.
    select  a~qmnum
             matnr
             kunum
             iwerk
             spart
    into    corresponding fields of iviqmel
    from    ( qmih as a
               inner join qmel as b on b~qmnum = a~qmnum )
    for all entries in i_tab
    where qmart in ('FS', 'ZP')
           and iwerk   = i_tab-iwerk
           and matnr   = i_tab-matnr
           and kunum   = i_tab-kunum
           and spart   = i_tab-spart.
       append iviqmel.
    endselect.
    The program is also using the proper index on table QMEL.
    Can you please give me any suggestion on how to resolve this issue?
    Regards,
    Dharmesh

    Hi Rob,
    I ran the SQL trace also; the query is using the correct index,
    and I have also changed the select query as follows:
    if not i_tab[] is initial.
        select a~matnr a~kunum a~spart
               b~qmnum b~iwerk
               into corresponding fields of table iviqmel
               from qmel as a inner join
                    qmih as b
                    on a~qmnum = b~qmnum
               for all entries in i_tab
               where a~qmart in ('FS', 'ZP')
                 and a~matnr   = i_tab-matnr
                 and a~kunum   = i_tab-kunum
                 and a~spart   = i_tab-spart
                 and b~iwerk   = i_tab-iwerk.
    endif.
    Even then it is giving the time-out error.
    Regards,
    Dharmesh

  • My Mac froze in an application so I shut it down by powering off with the button; now when I try to turn it on I get a grey screen with the Apple logo and the spinning timer, but it doesn't get past this. Please help!

    I shut down my Mac by holding in the power button after my iMac froze, and now when I try to turn it back on all I get is the grey screen with the Apple logo and the spinning timer, and it doesn't get any further. I have tried the diagnostic test but nothing was found.

    Take each of these steps that you haven't already tried. Stop when the problem is resolved.
    To restart an unresponsive computer, press and hold the power button for a few seconds until the power shuts off, then release, wait a few more seconds, and press it again briefly.
    Step 1
    The first step in dealing with a startup failure is to secure the data. If you want to preserve the contents of the startup drive, and you don't already have at least one current backup, you must try to back up now, before you do anything else. It may or may not be possible. If you don't care about the data that has changed since the last backup, you can skip this step.
    There are several ways to back up a Mac that is unable to start. You need an external hard drive to hold the backup data.
    a. Start up from the Recovery partition, or from a local Time Machine backup volume (option key at startup.) When the OS X Utilities screen appears, launch Disk Utility and follow the instructions in this support article, under “Instructions for backing up to an external hard disk via Disk Utility.” The article refers to starting up from a DVD, but the procedure in Recovery mode is the same. You don't need a DVD if you're running OS X 10.7 or later.
    b. If Step 1a fails because of disk errors, and no other Mac is available, then you may be able to salvage some of your files by copying them in the Finder. If you already have an external drive with OS X installed, start up from it. Otherwise, if you have Internet access, follow the instructions on this page to prepare the external drive and install OS X on it. You'll use the Recovery installer, rather than downloading it from the App Store.
    c. If you have access to a working Mac, and both it and the non-working Mac have FireWire or Thunderbolt ports, start the non-working Mac in target disk mode. Use the working Mac to copy the data to another drive. This technique won't work with USB, Ethernet, Wi-Fi, or Bluetooth.
    d. If the internal drive of the non-working Mac is user-replaceable, remove it and mount it in an external enclosure or drive dock. Use another Mac to copy the data.
    Step 2
    If the startup process stops at a blank gray screen with no Apple logo or spinning "daisy wheel," then the startup volume may be full. If you had previously seen warnings of low disk space, this is almost certainly the case. You might be able to start up in safe mode even though you can't start up normally. Otherwise, start up from an external drive, or else use the technique in Step 1b, 1c, or 1d to mount the internal drive and delete some files. According to Apple documentation, you need at least 9 GB of available space on the startup volume (as shown in the Finder Info window) for normal operation.
    Step 3
    Sometimes a startup failure can be resolved by resetting the NVRAM.
    Step 4
    If a desktop Mac hangs at a plain gray screen with a movable cursor, the keyboard may not be recognized. Press and hold the button on the side of an Apple wireless keyboard to make it discoverable. If need be, replace or recharge the batteries. If you're using a USB keyboard connected to a hub, connect it to a built-in port.
    Step 5
    If there's a built-in optical drive, a disc may be stuck in it. Follow these instructions to eject it.
    Step 6
    Press and hold the power button until the power shuts off. Disconnect all wired peripherals except those needed to start up, and remove all aftermarket expansion cards. Use a different keyboard and/or mouse, if those devices are wired. If you can start up now, one of the devices you disconnected, or a combination of them, is causing the problem. Finding out which one is a process of elimination.
    Step 7
    If you've started from an external storage device, make sure that the internal startup volume is selected in the Startup Disk pane of System Preferences.
    Start up in safe mode. Note: If FileVault is enabled in OS X 10.9 or earlier, or if a firmware password is set, or if the startup volume is a software RAID, you can’t do this. Post for further instructions.
    Safe mode is much slower to start and run than normal, and some things won’t work at all, including wireless networking on certain Macs.
    The login screen appears even if you usually log in automatically. You must know the login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.
    When you start up in safe mode, it's normal to see a dark gray progress bar on a light gray background. If the progress bar gets stuck for more than a few minutes, or if the system shuts down automatically while the progress bar is displayed, the startup volume is corrupt and the drive is probably malfunctioning. In that case, go to Step 11. If you ever have another problem with the drive, replace it immediately.
    If you can start and log in in safe mode, empty the Trash, and then open the Finder Info window on the startup volume ("Macintosh HD," unless you gave it a different name.) Check that you have at least 9 GB of available space, as shown in the window. If you don't, copy as many files as necessary to another volume (not another folder on the same volume) and delete the originals. Deletion isn't complete until you empty the Trash again. Do this until the available space is more than 9 GB. Then restart as usual (i.e., not in safe mode.)
    If the startup process hangs again, the problem is likely caused by a third-party system modification that you installed. Post for further instructions.
    Step 8
    Launch Disk Utility in Recovery mode (see Step 1.) Select the startup volume, then run Repair Disk. If any problems are found, repeat until clear. If Disk Utility reports that the volume can't be repaired, the drive has malfunctioned and should be replaced. You might choose to tolerate one such malfunction in the life of the drive. In that case, erase the volume and restore from a backup. If the same thing ever happens again, replace the drive immediately.
    This is one of the rare situations in which you should also run Repair Permissions, ignoring the false warnings it may produce. Look for the line "Permissions repair complete" at the end of the output. Then restart as usual.
    Step 9
    If the startup device is an aftermarket SSD, it may need a firmware update and/or a forced "garbage collection." Instructions for doing this with a Crucial-branded SSD were posted here. Some of those instructions may apply to other brands of SSD, but you should check with the vendor's tech support.  
    Step 10
    Reinstall the OS. If the Mac was upgraded from an older version of OS X, you’ll need the Apple ID and password you used to upgrade.
    Step 11
    Do as in Step 10, but this time erase the startup volume in Disk Utility before installing. The system should automatically restart into the Setup Assistant. Follow the prompts to transfer the data from a Time Machine or other backup.
    Step 12
    This step applies only to models that have a logic-board ("PRAM") battery: all Mac Pros and some others (not current models). Both desktop and portable Macs used to have such a battery. The logic-board battery, if there is one, is separate from the main battery of a portable. A dead logic-board battery can cause a startup failure. Typically the failure will be preceded by loss of the settings for the startup disk and system clock. See the user manual for replacement instructions. You may have to take the machine to a service provider to have the battery replaced.
    Step 13
    If you get this far, you're probably dealing with a hardware fault. Make a "Genius" appointment at an Apple Store, or go to another authorized service provider.

  • HT204408 When trying to log into FaceTime I get "the registering device does not have appropriate credentials". I have OS X 10.8.4 on my Mac Pro. I can log in using my iPad

    When trying to log into FaceTime I get "the registering device does not have appropriate credentials" after I sign in. I have OS X 10.8.4 on my Mac Pro. I can log into FaceTime without a problem using my iPad.

    Please take each of the following steps that you haven't already tried, until the issue is resolved. If there's no resolution after Step 3, post your results.
    Step 1
    Sign out of iMessage in the Accounts tab of the preferences dialog, then sign back in.
    Step 2
    Log out of your user account and log back in.
    Step 3
    Boot in safe mode and test, then reboot as usual and test again.
    Note: If FileVault is enabled on some models, or if a firmware password is set, or if the boot volume is a software RAID, you can’t do this. Ask for further instructions.
    Safe mode is much slower to boot and run than normal, and some things won’t work at all, including sound output and  Wi-Fi on certain iMacs. The next normal boot may also be somewhat slow.
    The login screen appears even if you usually log in automatically. You must know your login password in order to log in. If you’ve forgotten the password, you will need to reset it before you begin.

  • Query for log Parser to get number of hits in a day or week for particular web applications or site collection

    Hi All,
    I want to get the number of hits per day for a web application from the IIS logs, so I need to know the Log Parser query to get the number of hits per day or per week for a particular web application or site collection. Kindly help.
    Regards,
    Naveen

    I'm trying to get this from WSS 3.0, hence I am using Log Parser.
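    As a starting point, here is a hedged Log Parser 2.2 sketch for counting hits per day. Save the query as hits.sql and run it with something like LogParser.exe -i:IISW3C -o:CSV file:hits.sql; the log path and the URL filter are assumptions you will need to adjust for your WSS 3.0 web application:
    SELECT date, COUNT(*) AS Hits
    FROM C:\WINDOWS\system32\LogFiles\W3SVC1\ex*.log
    WHERE cs-uri-stem LIKE '/sites/yoursitecollection%'
    GROUP BY date
    ORDER BY date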

  • Is it possible to query a collection and get the last logged on user?

    Hi
    Is it possible in SCCM to query a collection to get the computer names and the last logged-on user for each computer? I can't seem to find how to do this.
    Thank you

    try this out:
    select distinct
        v_R_System.Netbios_Name0 AS [Computer Name],
        v_GS_COMPUTER_SYSTEM.UserName0 AS [User Name]
    from v_R_System
    left join v_GS_COMPUTER_SYSTEM on (v_GS_COMPUTER_SYSTEM.ResourceID = v_R_System.ResourceID)
    inner join v_FullCollectionMembership as b on (b.ResourceID = v_R_System.ResourceID)
    left join v_GS_SYSTEM_CONSOLE_USAGE_MAXGROUP on (v_GS_SYSTEM_CONSOLE_USAGE_MAXGROUP.ResourceID = v_R_System.ResourceID)
    inner join v_GS_SYSTEM e on (e.ResourceID = b.ResourceID)
    where b.CollectionID = @collectionid
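    If you also want the user who most frequently logs on (as opposed to the user recorded by hardware inventory), a hedged variant is sketched below; it additionally reads TopConsoleUser0 from the console-usage view the query above already joins. The column name is an assumption on my part, so verify it against the views in your site database.
    select distinct
        v_R_System.Netbios_Name0 AS [Computer Name],
        v_GS_COMPUTER_SYSTEM.UserName0 AS [Last Logon User],
        v_GS_SYSTEM_CONSOLE_USAGE_MAXGROUP.TopConsoleUser0 AS [Top Console User]   -- assumed column name
    from v_R_System
    left join v_GS_COMPUTER_SYSTEM on (v_GS_COMPUTER_SYSTEM.ResourceID = v_R_System.ResourceID)
    left join v_GS_SYSTEM_CONSOLE_USAGE_MAXGROUP on (v_GS_SYSTEM_CONSOLE_USAGE_MAXGROUP.ResourceID = v_R_System.ResourceID)
    inner join v_FullCollectionMembership as b on (b.ResourceID = v_R_System.ResourceID)
    where b.CollectionID = @collectionid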
    Blog: http://theinfraguys.com
    Follow me at Facebook
    The Infra Guys Facebook Page
    Please remember to click Mark as Answer on the answer if it helps you in anyway

  • Query a log for error in the last 30 minutes

    Hi,
    I'm trying to parse a plain text log file for errors and then send an email.
    The error code I'm looking for is "297 WARN". I only want to check the last 30 minutes of the log file and will run the PowerShell parsing script every 30 mins.
    I'm stuck at the bit where I make PowerShell only scan back over the last 30 minutes. When I run the script below without the substring it works, but it parses the whole log file. How can I get it to stop parsing at the 30-minute mark?
    The script I have so far is as follows:
    # Set Time Variable
    $Date = Get-Date
    $dateStr = $Date.AddMinutes(-30).ToString("yyyy-MM-dd HH:mm")
    #Set log file location
    $File = "C:\log.txt"
    # Get content from log to time
    $logfile = Get-Content $File | ? { $_.substring(0,11) -ge $datestr }  | Select-String "297 WARN"
    if ($logfile) {
      foreach ($line in $logfile){
        Write-Host $line
    <# 
      ###Send Email
        $mailMessage = New-Object System.Net.Mail.MailMessage
        $mailMessage.IsBodyHTML = $true
        $mailMessage.From = "$email"
        $mailMessage.Subject = "Log Checked with Error"
        $SMTPServer = "smtp.mailserver.com"
        $SMTPClient = New-Object Net.Mail.SmtpClient($SmtpServer, 25)
        $mailMessage.To.Add("[email protected]")
        $mailMessage.Body = $line
        $SMTPClient.Send($mailMessage)
        $LogAttachment.Dispose()
    #>
      }
    } else {
      Write-Host "$($Date): no errors"
    }
    The log file has the following format (two lines shown):
    2015-01-21 16:58:43,148 INFO  [ActivityInstanceBean] (CarnotApplicationQueueListenerContainer-1) State change for Activity instance 'Warten',  oid: 1853 (process instance = 259) (model oid = 2, version = 4.1.6, revision = 0): Created-->Hibernated.
    2015-01-21 16:59:00,297 WARN  [SQL] (CarnotDaemonQueueListenerContainer-4) Failed query: SELECT oid, type, code, stamp, state, partition FROM dbo.daemon_log WHERE (dbo.daemon_log.type = 'event.daemon' AND dbo.daemon_log.code = 0 AND dbo.daemon_log.partition
    = 1)
    Thanks very much for any help!

    Hi Dforager,
    To parse the log file based on the datetime, please refer to the script below as a starting point:
    $file = "e:\log.txt"
    $Date = Get-Date
    Get-Content $file |
    Select-String "297 WARN" -SimpleMatch |
    select -expand line |
    foreach {
        if ($_ -match '(.+),297 WARN\s(.+)') {
            # build an object with the parsed timestamp and message,
            # then keep only entries from the last 30 minutes
            new-object psobject -Property @{
                Date  = [datetime]$matches[1]
                Error = $matches[2] } |
            where { $_.Date -gt $Date.AddMinutes(-30) }
        }
    }
    If there is anything else regarding this script, please feel free to post back.
    Best Regards,
    Anna Wang
    Please remember to mark the replies as answers if they help and unmark them if they provide no help. If you have feedback for TechNet Support, contact [email protected]

  • Query in timesten taking more time than query in oracle database

    Hi,
    Can anyone please explain why a query in TimesTen is taking more time
    than the same query in the Oracle database?
    I am describing below, step by step, what my settings are and what I have done.
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0):
    CREATE TABLE student (
    id NUMBER(9) PRIMARY KEY,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY..
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen is taking more time than the query in the Oracle database?
    Message was edited by: Dipesh Majumdar (user542575)

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) that I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    I uncomment one column at a time and rerun. I improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns   ORACLE   TimesTen
    1                1.05     0.94
    2                1.07     1.47
    3                2.04     1.8
    4                2.06     2.08
    5                2.09     2.4
    6                3.01     2.67
    7                4.02     3.06
    8                4.03     3.37
    9                4.04     3.62
    10               4.06     4.02
    11               4.08     4.31
    12               4.09     4.61
    13               5.01     4.76
    14               5.02     5.06
    15               5.04     5.25
    16               5.05     5.48
    17               5.08     5.84
    18               6        6.21
    19               6.02     6.34
    20               6.04     6.75

  • SharePoint TempDB.mdf growing too large? I have to restart SQL Server all the time. Please help

    Hi there,
    On our DEV SharePoint farm > SQL server
    The tempdb.mdf file grows too quickly and too much. I am tired of increasing the space and cannot do that anymore.
    All the time I have to reboot the SQL Server to get tempdb back to a normal size.
    The Live farm is okay (with similar data), so it must be something wrong with our
    DEV environment.
    Any idea how to fix this, please?
    Thanks so much.

    How do you get tempdb back to 'normal size'? How large is large, and how small is normal?
    Have you put the databases in simple recovery mode? It's normal for dev environments not to have the required transaction log backups to keep the ldf files in check. That won't affect tempdb, but if you've got bigger issues then that might be a symptom.
    Have you turned off autogrowth for tempdb?
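    Before restarting, it may help to measure what "large" actually means on that server. A minimal T-SQL sketch for checking the current tempdb file sizes and how much of each file is really in use (size is stored in 8 KB pages, so dividing by 128 gives MB):
    USE tempdb;
    SELECT name,
           size / 128.0 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
    FROM sys.database_files;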

  • Use SQL function to get the original order number using the invoice number

    Hi All,
    wondering if someone can help me with this challenge I am having? Often I need to return the original order numbers that created the resulting invoice. This is a relatively simple series of joins in a query, but I want to simplify it using a SQL function that can easily be referenced each time from within the SELECT statement. The code I currently have is:
    CREATE FUNCTION dbo.fnOrdersThatMakeInvoice(@InvNum int)
    RETURNS nvarchar(200)
    AS
    BEGIN
    DECLARE @OrderList nvarchar(200)
    SET @OrderList = ''
    SELECT @OrderList = @OrderList + (cast(T6.DocNum AS nvarchar(10)) + ' ')
    FROM  OINV AS T1 INNER JOIN
          INV1 AS T2 ON T1.DocEntry = T2.DocEntry INNER JOIN
          DLN1 AS T4 ON T2.BaseEntry = T4.DocEntry AND T2.BaseLine = T4.LineNum INNER JOIN
          RDR1 AS T5 ON T4.BaseEntry = T5.DocEntry AND T4.BaseLine = T5.LineNum INNER JOIN
          ORDR AS T6 ON T5.DocEntry = T6.DocEntry
    WHERE T1.DocNum = @InvNum
    RETURN @OrderList 
    END
    it is run by the following query:
    Select T1.DocNum, dbo.fnOrdersThatMakeInvoice(T1.DocNum)
    From OINV T1
    Where T1.DocNum = 'your invoice number here'
    The issue is that this returns the order number for every line in the invoice.  I only want to see a summary of the order numbers, i.e. if 3 orders were used to make a 20-line invoice I only want to see the 3 order numbers returned in the field.
    If this were a simple reporting SELECT query I would use SELECT DISTINCT.  But I can't do that here.
    Any ideas?
    Thanks,
    Mike

    Thanks Gordon,
    I am trying to get away from the massive table access list every time I write a query where I need to access the original order numbers of an invoice.  However, I have managed to solve my own problem with a GROUP BY statement!
    Others may be interested, so the code is this:
    CREATE FUNCTION dbo.fnOrdersThatMakeInvoice(@InvNum int)
    RETURNS nvarchar(200)
    AS
    BEGIN
    DECLARE @OrderList nvarchar(200)
    SET @OrderList = ''
    SELECT @OrderList = @OrderList + (cast(T6.DocNum AS nvarchar(10)) + ' ')
    FROM  OINV AS T1 INNER JOIN
          INV1 AS T2 ON T1.DocEntry = T2.DocEntry INNER JOIN
          DLN1 AS T4 ON T2.BaseEntry = T4.DocEntry AND T2.BaseLine = T4.LineNum INNER JOIN
          RDR1 AS T5 ON T4.BaseEntry = T5.DocEntry AND T4.BaseLine = T5.LineNum INNER JOIN
          ORDR AS T6 ON T5.DocEntry = T6.DocEntry
    WHERE T1.DocNum = @InvNum
    GROUP BY T6.DocNum
    RETURN @OrderList 
    END
    and to call it use this:
    Select T1.DocNum, dbo.fnOrdersThatMakeInvoice(T1.DocNum)
    From OINV T1
    Where T1.DocNum = 'your invoice number'
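    Purely as an alternative sketch (this is not from the original posts): the same distinct, space-separated list can be built inline with STUFF and FOR XML PATH, avoiding the scalar function altogether. It assumes the same SAP Business One tables and join path used above:
    SELECT T1.DocNum,
           STUFF((SELECT DISTINCT ' ' + CAST(T6.DocNum AS nvarchar(10))
                  FROM INV1 AS T2
                  INNER JOIN DLN1 AS T4 ON T2.BaseEntry = T4.DocEntry AND T2.BaseLine = T4.LineNum
                  INNER JOIN RDR1 AS T5 ON T4.BaseEntry = T5.DocEntry AND T4.BaseLine = T5.LineNum
                  INNER JOIN ORDR AS T6 ON T5.DocEntry = T6.DocEntry
                  WHERE T2.DocEntry = T1.DocEntry
                  FOR XML PATH('')), 1, 1, '') AS OrderList
    FROM OINV AS T1
    WHERE T1.DocNum = 12345   -- your invoice number here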

  • Why is this query taking much longer than expected?

    Hi,
    I need expert support on the issue below:
    Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there a better way to achieve the result in the shortest time?  Below, please find the DDL and the query:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
     (
      [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
     (
      [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
     (
      [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
     (
      [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
     (
      [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
     (
      [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
     (
      [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
    WITH TEMP_TABLE AS (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
    TEMP_TABLE2 AS (
    SELECT CollectionDate, Bankname, DSLIPS AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS FROM TEMP_TABLE GROUP BY CollectionDate, DSLIPS, Bankname)
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

    Without seeing an execution plan of the query it is hard to suggest something useful. Try inserting the result of the UNION ALL into a temporary table and then perform the aggregation on that table, not in a CTE.
    Just:
    SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM
    #tmp Group By CollectionDate,DSLIPS,Bankname
    HAVING COUNT(DSLIPS)<>0;
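    To make that suggestion concrete, here is a minimal, self-contained sketch of the temp-table approach with invented sample rows; in the real query the SELECT ... INTO #tmp would be fed by the two UNION ALL branches above rather than by literals:
    SELECT CollectionDate, Bankname, DSLIPS, BN, COINS
    INTO #tmp
    FROM (SELECT CAST('2015-01-01' AS date) AS CollectionDate, 'BankA' AS Bankname, 1 AS DSLIPS, 100 AS BN, 0 AS COINS
          UNION ALL
          SELECT CAST('2015-01-01' AS date), 'BankA', 2, 0, 40) AS u;
    SELECT CollectionDate, Bankname, COUNT(DSLIPS) AS DSLIPS, SUM(BN) AS BN, SUM(COINS) AS COINS
    FROM #tmp
    GROUP BY CollectionDate, Bankname
    HAVING COUNT(DSLIPS) <> 0;
    DROP TABLE #tmp;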
    Best Regards, Uri Dimant, SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Ask for a query SQL

    Hi,
    The schema's definition is:
    A
    =======================
    UserID
    SubTime
    ActiveTime
    DeactiveTime
    ExpireTime
    =======================
    I need to get records in the following format:
    2003-01-01,<userid>,<action type>
    The <action type> should be 1, 2, 3 or 4, indicating whether the time is the 'SubTime', 'ActiveTime', 'DeactiveTime' or 'ExpireTime'.
    For example, there is a record in database:
    UserID SubTime ActiveTime DeactiveTime ExpireTime
    1 2004-01-01 2004-01-02 2004-01-03 2004-01-04
    The search results should be:
    2004-01-01, 1, 1
    2004-01-02, 1, 2
    2004-01-03, 1, 3
    2004-01-04, 1, 4
    The search result should be ordered by time.
    The size of the table is very big; can anyone suggest a high-performance SQL query? Thank you very much in advance!
    BR/Shirley

    Hi,
    select event_time, userid, action_type from
    (select UserID, SubTime as event_time, 1 as action_type from utab
    union all
    select UserID, ActiveTime, 2 from utab where ActiveTime is not null
    union all
    select UserID, DeactiveTime, 3 from utab where DeactiveTime is not null
    union all
    select UserID, ExpireTime, 4 from utab where ExpireTime is not null)
    order by 1
    Please check the performance.
    Bye
    Chitta
