Error in SQL query: "loop has run more times than expected (Loop Counter went negative)"

Hello,
When I run the query below
DECLARE @LoopCount int
SET @LoopCount = (SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL)
WHILE (
    SELECT Count(*)
    FROM KC_PaymentTransactionIDConversion with (nolock)
    Where KC_Transaction_ID is NULL
    and TransactionYear is NOT NULL
) > 0
BEGIN
    IF @LoopCount < 0
        RAISERROR ('Issue with data in KC_PaymentTransactionIDConversion, loop has run more times than expected (Loop Counter went negative).', -- Message text.
               16, -- Severity.
               1 -- State.
               );
    SET @LoopCount = @LoopCount - 1
END
I am getting the error "loop has run more times than expected (Loop Counter went negative)".
Could anyone help with this issue ASAP?
Thanks,
Vinay

Hi Vinay,
According to your code above, the error message makes sense. The WHILE condition only checks whether "SELECT Count(*) FROM KC_PaymentTransactionIDConversion with (nolock) Where KC_Transaction_ID is NULL and TransactionYear is NOT NULL" returns a value greater than 0, and the loop body does nothing but decrement @LoopCount.
Since the loop never changes the table data, that count stays greater than 0, so @LoopCount keeps decreasing until it goes negative and the RAISERROR fires.
To fix this issue with the current information, make the following modification:
Change the code
WHILE (
SELECT Count(*)
FROM KC_PaymentTransactionIDConversion with (nolock)
Where KC_Transaction_ID is NULL
and TransactionYear is NOT NULL
) > 0
To
WHILE @LoopCount > 0
Besides, since the loop body currently does no useful work, please modify the query based on your actual requirement.
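For reference, here is a minimal sketch of the bounded loop shape described above. It is not your full conversion logic: the body is only a placeholder comment for whatever per-iteration work you actually intend (for example, populating KC_Transaction_ID for a batch of rows), and only the table and column names from your post are reused.
DECLARE @LoopCount int;
SET @LoopCount = (SELECT COUNT(*)
                  FROM KC_PaymentTransactionIDConversion WITH (NOLOCK)
                  WHERE KC_Transaction_ID IS NULL
                    AND TransactionYear IS NOT NULL);
WHILE @LoopCount > 0   -- bounded by the counter, so it cannot go negative
BEGIN
    -- Placeholder: do one unit of conversion work here,
    -- e.g. populate KC_Transaction_ID for one batch of NULL rows.
    SET @LoopCount = @LoopCount - 1;
END;
With this shape the loop runs at most the number of times counted up front, so the negative-counter guard is no longer needed.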
If there are any other questions, please feel free to ask.
Thanks,
Katherine Xiong
TechNet Community Support

Similar Messages

  • Less SQL query cost ... but more time to fetch the records...

    Hi,
    I faced a 'peculiar' situation with a costly db view.
    I attempted to reduce the total view cost, specifically for the 223 records fetched
    (the cost went from 149 down to 74, the recursive calls from 796 down to 224,
    the consistent gets from 311516 down to 310341, and the physical reads from 7 down to 0),
    but the amount of time needed to fetch the results is greater than with the old version of the db view... (it may even be double...).
    Have you any idea about this?
    Note: I have gathered fresh statistics.
    I use db 10g v.2
    Thanks,
    Sim

    Try tracing the query and see what tkprof shows you.
    alter session set events '10046 trace name context forever, level 12';
    run the query
    alter session set events '10046 trace name context off';
    Then run tkprof on the trace file produced to see where the database is spending its time/effort. Do likewise for the baseline query in a different session (so you will generate a different trace file).
    If that does not produce anything useful, try using
    alter session set events '10053 trace name context forever';
    run the query
    alter session set events '10053 trace name context off';
    Then examine the trace file to see if you can learn anything.
    Since you have not given us anything more to go on, that is about all the help I can give....
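    In case it helps, the same level-12 trace can also be switched on through the supported DBMS_MONITOR package on 10g; this is only a sketch of the equivalent calls (NULL session_id/serial# means the current session), not something the original poster mentioned:
    BEGIN
      -- waits => TRUE, binds => TRUE corresponds to event 10046 level 12
      DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => NULL, serial_num => NULL,
                                        waits => TRUE, binds => TRUE);
    END;
    /
    -- run the query under test here
    BEGIN
      DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => NULL, serial_num => NULL);
    END;
    /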

  • When will there be a fix for AirPrint in Yosemite? I cannot use my HP printer. I have uninstalled/reinstalled everything you can think of. I have deleted and added my printer more times than I can count. I have checked for updates. Nothing has worked.

    When will there be a fix for AirPrint? Since upgrading to Yosemite I cannot print from my MacBook Air, which I use mostly. (I can print from my old 2005 G5 desktop and iPad 2 just fine on the same network.) I could print just fine when I had Mavericks. I have tried EVERYTHING under the sun: resetting the print system, deleting and adding the printer, checking for updates, uninstalling and reinstalling printer software and folders, turning off the modem, router etc. NOTHING works.
    The laptop shows that it is printing but nothing happens. Then it says it cannot locate the printer or there is a broken pipe. But it does tell me that I have a low level in one of the ink cartridges (which is true.)
    Help! I have warned everyone I know not to upgrade. Is there a way to go back to Mavericks?
    MacBook Air (mid 2012), OS 10.10,  HP Officejet 7500a

    This is what appeared on Console:
    11/12/14 12:42:32.558 PM com.apple.preference.printfax.remoteservice[3099]: <NSViewServiceMarshal: 0x7fa398514bd0> failed to complete rights grant 6B212CA9-7872-4726-BA05-68BD5C994479 due to 3
    timestamp: 12:42:32.557 Wednesday 12 November 2014
    process/thread/queue: com.apple.preference.printfax.remoteservice (3099) / 0x7fff78cb4300 / com.apple.main-thread
    code: line 834 of /SourceCache/ViewBridge/ViewBridge-99/NSViewServiceMarshal.m in -[NSViewServiceMarshal invalidateWindowRights]
    #exceptions
    11/12/14 12:42:32.598 PM com.apple.xpc.launchd[1]: (com.apple.preference.printfax.remoteservice[3099]) Service exited due to signal: Killed: 9
    11/12/14 12:42:37.845 PM com.apple.appkit.xpc.openAndSavePanelService[3298]: assertion failed: 14A389: libxpc.dylib + 97940 [9437C02E-A07B-38C8-91CB-299FAA63083D]: 0x89
    11/12/14 12:42:37.879 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/CONM 136 - Study Guide Test 3 (2014).docx
    11/12/14 12:42:37.971 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/12%20angry%20men.docx
    11/12/14 12:42:38.033 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/Quiz%20#2.docx
    11/12/14 12:42:39.712 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/12%20angry%20men.docx
    11/12/14 12:42:39.000 PM kernel[0]: Sandbox: Pages(3295) deny file-read-data /Users/Mom/Downloads/Quiz%20#2.docx
    11/12/14 12:42:39.748 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/12%20angry%20men.docx
    11/12/14 12:42:39.850 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/12%20angry%20men.docx
    11/12/14 12:42:39.874 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/Quiz%20#2.docx
    11/12/14 12:42:39.945 PM sandboxd[246]: ([3295]) Pages(3295) deny file-read-data /Users/Mom/Downloads/Quiz%20#2.docx
    11/12/14 12:42:41.000 PM kernel[0]: Sandbox: storeuid(3051) deny mach-lookup com.apple.dock.server
    11/12/14 12:42:48.968 PM WindowServer[132]: WSGetSurfaceInWindow : Invalid surface 702894100 for window 1378
    11/12/14 12:42:48.968 PM WindowServer[132]: WSGetSurfaceInWindow : Invalid surface 702894100 for window 1378
    11/12/14 12:43:17.000 PM kernel[0]: Sandbox: storeuid(3051) deny mach-lookup com.apple.dock.server
    11/12/14 12:43:23.225 PM com.apple.xpc.launchd[1]: (com.apple.quicklook[3316]) Endpoint has been activated through legacy launch(3) APIs. Please switch to XPC or bootstrap_check_in(): com.apple.quicklook
    11/12/14 12:43:24.355 PM WindowServer[132]: disable_update_timeout: UI updates were forcibly disabled by application "PrinterProxy" for over 1.00 seconds. Server has re-enabled them.
    11/12/14 12:43:24.946 PM WindowServer[132]: common_reenable_update: UI updates were finally reenabled by application "PrinterProxy" after 1.59 seconds (server forcibly re-enabled them after 1.00 seconds)
    11/12/14 12:43:34.503 PM mdworker[3319]: code validation failed in the process of getting signing information: Error Domain=NSOSStatusErrorDomain Code=-67062 "The operation couldn’t be completed. (OSStatus error -67062.)"
    11/12/14 12:43:34.514 PM mdworker[3320]: code validation failed in the process of getting signing information: Error Domain=NSOSStatusErrorDomain Code=-67062 "The operation couldn’t be completed. (OSStatus error -67062.)"
    11/12/14 12:43:34.604 PM mdworker[3318]: code validation failed in the process of getting signing information: Error Domain=NSOSStatusErrorDomain Code=-67062 "The operation couldn’t be completed. (OSStatus error -67062.)"
    11/12/14 12:44:38.628 PM sandboxd[246]: ([3051]) storeuid(3051) deny mach-lookup com.apple.dock.server
    11/12/14 12:44:41.384 PM locationd[63]: Location icon should now be in state 'Active'
    11/12/14 12:44:44.653 PM sandboxd[246]: ([375]) com.apple.metada(375) deny mach-lookup com.apple.cfnetwork.cfnetworkagent
    11/12/14 12:44:44.793 PM sandboxd[246]: ([375]) com.apple.metada(375) deny mach-lookup com.apple.cfnetwork.cfnetworkagent
    11/12/14 12:44:45.149 PM sandboxd[246]: ([375]) com.apple.metada(375) deny mach-lookup com.apple.cfnetwork.cfnetworkagent
    11/12/14 12:44:56.515 PM Console[3327]: Failed to connect (_consoleX) outlet from (NSApplication) to (ConsoleX): missing setter or instance variable
    11/12/14 12:44:56.641 PM locationd[63]: Location icon should now be in state 'Inactive'
    11/12/14 12:44:56.834 PM sandboxd[246]: ([3051]) storeuid(3051) deny mach-lookup com.apple.dock.server
    Not sure what any of this means.

  • Why does crontab run more times than expected?

    I have the following crontab entry and want to run myapp.pl every 5 minutes between 13:00 and 15:00.
    #Minute Hour   Day    Month  Dayofweek command
    #update daily tractions data
    */5    13-15    *    *    1-5        myapp.pl
    But from the log file below I found that myapp.pl ran more times than I had expected.
    Log:
       started on 2014-01-23 15:05:01
       started on 2014-01-23 15:10:01
       started on 2014-01-23 15:15:01
       started on 2014-01-23 15:20:01
       started on 2014-01-23 15:25:00
       started on 2014-01-23 15:30:01
       started on 2014-01-23 15:35:02
    Any ideas?  Thanks.
    Btw, I don't want to use a Launch Agent or anything like that.

    That would lose the 15:00 run.
    Probably I should write:
    */5 13-14 * * 1-5 myapp.pl
    0   15    * * 1-5 myapp.pl

  • Query on a table runs more than 45 mins (after stats) and the same query runs 19 secs (before stats rebuild)

    A query on a table runs for more than 45 mins after gathering stats, and the same query runs in 19 secs before the stats rebuild - not sure what the cause is.
    - Analysed the explain plan.
    - A different explain plan is used after the stats gather.
    Any idea what could cause this kind of difference?
    Thank you!

    What is the difference you see in the explain plans? Where does it spend most of its time? All of this needs to be analysed.
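    As a rough illustration of how to capture the two plans for comparison (a sketch only; replace the SELECT with the statement in question):
    EXPLAIN PLAN FOR
    SELECT * FROM dual;          -- put the slow query here
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
    -- or, immediately after actually running the statement in the same session:
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);
    Running this once with the fresh statistics and once with the old statistics should show where the two plans diverge.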

  • Query in TimesTen taking more time than the same query in the Oracle database

    Hi,
    Can anyone please explain why a query in TimesTen takes more time
    than the same query in the Oracle database?
    I describe below, step by step, what my settings are and what I have
    done.........
    1. This is the table I created in the Oracle database
    (Oracle Database 10g Enterprise Edition Release 10.2.0.1.0)...
    CREATE TABLE student (
    id NUMBER(9) primary key,
    first_name VARCHAR2(10),
    last_name VARCHAR2(10)
    );
    2.THIS IS THE ANONYMOUS BLOCK I USE TO
    POPULATE THE STUDENT TABLE(TOTAL 2599999 ROWS)...
    declare
    firstname varchar2(12);
    lastname varchar2(12);
    catt number(9);
    begin
    for cntr in 1..2599999 loop
    firstname:=(cntr+8)||'f';
    lastname:=(cntr+2)||'l';
    if cntr like '%9999' then
    dbms_output.put_line(cntr);
    end if;
    insert into student values(cntr,firstname, lastname);
    end loop;
    end;
    3. MY DSN IS SET THE FOLLOWING WAY..
    DATA STORE PATH- G:\dipesh3repo\db
    LOG DIRECTORY- G:\dipesh3repo\log
    PERM DATA SIZE-1000
    TEMP DATA SIZE-1000
    MY TIMESTEN VERSION-
    C:\Documents and Settings\dipesh>ttversion
    TimesTen Release 7.0.3.0.0 (32 bit NT) (tt70_32:17000) 2007-09-19T16:04:16Z
    Instance admin: dipesh
    Instance home directory: G:\TimestTen\TT70_32
    Daemon home directory: G:\TimestTen\TT70_32\srv\info
    THEN I CONNECT TO THE TIMESTEN DATABASE
    C:\Documents and Settings\dipesh> ttisql
    command>connect "dsn=dipesh3;oraclepwd=tiger";
    4. THEN I START THE AGENT
    call ttCacheUidPwdSet('SCOTT','TIGER');
    Command> CALL ttCacheStart();
    5.THEN I CREATE THE READ ONLY CACHE GROUP AND LOAD IT
    create readonly cache group rc_student autorefresh
    interval 5 seconds from student
    (id int not null primary key, first_name varchar2(10), last_name varchar2(10));
    load cache group rc_student commit every 100 rows;
    6.NOW I CAN ACCESS THE TABLES FROM TIMESTEN AND PERFORM THE QUERY
    I SET THE TIMING..
    command>TIMING 1;
    consider this query now..
    Command> select * from student where first_name='2155666f';
    < 2155658, 2155666f, 2155660l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.668822 seconds.
    another query-
    Command> SELECT * FROM STUDENTS WHERE FIRST_NAME='2340009f';
    2206: Table SCOTT.STUDENTS not found
    Execution time (SQLPrepare) = 0.074964 seconds.
    The command failed.
    Command> SELECT * FROM STUDENT where first_name='2093434f';
    < 2093426, 2093434f, 2093428l >
    1 row found.
    Execution time (SQLExecute + Fetch Loop) = 0.585897 seconds.
    Command>
    7.NOW I PERFORM THE SIMILAR QUERIES FROM SQLPLUS...
    SQL> SELECT * FROM STUDENT WHERE FIRST_NAME='1498671f';
    ID FIRST_NAME LAST_NAME
    1498663 1498671f 1498665l
    Elapsed: 00:00:00.15
    Can anyone please explain why the query in TimesTen takes more time
    than the same query in the Oracle database?

    TimesTen
    Hardware: Windows Server 2003 R2 Enterprise x64; 8 x Dual-core AMD 8216 2.41GHz processors; 32 GB RAM
    Version: 7.0.4.0.0 64 bit
    Schema:
    create usermanaged cache group factCache from
    MV_US_DATAMART (
    ORDER_DATE               DATE,
    IF_SYSTEM               VARCHAR2(32) NOT NULL,
    GROUPING_ID                TT_BIGINT,
    TIME_DIM_ID               TT_INTEGER NOT NULL,
    BUSINESS_DIM_ID          TT_INTEGER NOT NULL,
    ACCOUNT_DIM_ID               TT_INTEGER NOT NULL,
    ORDERTYPE_DIM_ID          TT_INTEGER NOT NULL,
    INSTR_DIM_ID               TT_INTEGER NOT NULL,
    EXECUTION_DIM_ID          TT_INTEGER NOT NULL,
    EXEC_EXCHANGE_DIM_ID TT_INTEGER NOT NULL,
    NO_ORDERS               TT_BIGINT,
    FILLED_QUANTITY          TT_BIGINT,
    CNT_FILLED_QUANTITY          TT_BIGINT,
    QUANTITY               TT_BIGINT,
    CNT_QUANTITY               TT_BIGINT,
    COMMISSION               BINARY_FLOAT,
    CNT_COMMISSION               TT_BIGINT,
    FILLS_NUMBER               TT_BIGINT,
    CNT_FILLS_NUMBER          TT_BIGINT,
    AGGRESSIVE_FILLS          TT_BIGINT,
    CNT_AGGRESSIVE_FILLS          TT_BIGINT,
    NOTIONAL               BINARY_FLOAT,
    CNT_NOTIONAL               TT_BIGINT,
    TOTAL_PRICE               BINARY_FLOAT,
    CNT_TOTAL_PRICE          TT_BIGINT,
    CANCELLED_ORDERS_COUNT          TT_BIGINT,
    CNT_CANCELLED_ORDERS_COUNT     TT_BIGINT,
    ROUTED_ORDERS_NO          TT_BIGINT,
    CNT_ROUTED_ORDERS_NO          TT_BIGINT,
    ROUTED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ROUTED_LIQUIDITY_QTY     TT_BIGINT,
    REMOVED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_REMOVED_LIQUIDITY_QTY     TT_BIGINT,
    ADDED_LIQUIDITY_QTY          TT_BIGINT,
    CNT_ADDED_LIQUIDITY_QTY     TT_BIGINT,
    AGENT_CHARGES               BINARY_FLOAT,
    CNT_AGENT_CHARGES          TT_BIGINT,
    CLEARING_CHARGES          BINARY_FLOAT,
    CNT_CLEARING_CHARGES          TT_BIGINT,
    EXECUTION_CHARGES          BINARY_FLOAT,
    CNT_EXECUTION_CHARGES          TT_BIGINT,
    TRANSACTION_CHARGES          BINARY_FLOAT,
    CNT_TRANSACTION_CHARGES     TT_BIGINT,
    ORDER_MANAGEMENT          BINARY_FLOAT,
    CNT_ORDER_MANAGEMENT          TT_BIGINT,
    SETTLEMENT_CHARGES          BINARY_FLOAT,
    CNT_SETTLEMENT_CHARGES          TT_BIGINT,
    RECOVERED_AGENT          BINARY_FLOAT,
    CNT_RECOVERED_AGENT          TT_BIGINT,
    RECOVERED_CLEARING          BINARY_FLOAT,
    CNT_RECOVERED_CLEARING          TT_BIGINT,
    RECOVERED_EXECUTION          BINARY_FLOAT,
    CNT_RECOVERED_EXECUTION     TT_BIGINT,
    RECOVERED_TRANSACTION          BINARY_FLOAT,
    CNT_RECOVERED_TRANSACTION     TT_BIGINT,
    RECOVERED_ORD_MGT          BINARY_FLOAT,
    CNT_RECOVERED_ORD_MGT          TT_BIGINT,
    RECOVERED_SETTLEMENT          BINARY_FLOAT,
    CNT_RECOVERED_SETTLEMENT     TT_BIGINT,
    CLIENT_AGENT               BINARY_FLOAT,
    CNT_CLIENT_AGENT          TT_BIGINT,
    CLIENT_ORDER_MGT          BINARY_FLOAT,
    CNT_CLIENT_ORDER_MGT          TT_BIGINT,
    CLIENT_EXEC               BINARY_FLOAT,
    CNT_CLIENT_EXEC          TT_BIGINT,
    CLIENT_TRANS               BINARY_FLOAT,
    CNT_CLIENT_TRANS          TT_BIGINT,
    CLIENT_CLEARING          BINARY_FLOAT,
    CNT_CLIENT_CLEARING          TT_BIGINT,
    CLIENT_SETTLE               BINARY_FLOAT,
    CNT_CLIENT_SETTLE          TT_BIGINT,
    CHARGEABLE_TAXES          BINARY_FLOAT,
    CNT_CHARGEABLE_TAXES          TT_BIGINT,
    VENDOR_CHARGE               BINARY_FLOAT,
    CNT_VENDOR_CHARGE          TT_BIGINT,
    ROUTING_CHARGES          BINARY_FLOAT,
    CNT_ROUTING_CHARGES          TT_BIGINT,
    RECOVERED_ROUTING          BINARY_FLOAT,
    CNT_RECOVERED_ROUTING          TT_BIGINT,
    CLIENT_ROUTING               BINARY_FLOAT,
    CNT_CLIENT_ROUTING          TT_BIGINT,
    TICKET_CHARGES               BINARY_FLOAT,
    CNT_TICKET_CHARGES          TT_BIGINT,
    RECOVERED_TICKET_CHARGES     BINARY_FLOAT,
    CNT_RECOVERED_TICKET_CHARGES     TT_BIGINT,
    PRIMARY KEY(ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID, INSTR_DIM_ID, EXECUTION_DIM_ID,EXEC_EXCHANGE_DIM_ID),
    READONLY);
    No of rows: 2228558
    Config:
    < CkptFrequency, 600 >
    < CkptLogVolume, 0 >
    < CkptRate, 0 >
    < ConnectionCharacterSet, US7ASCII >
    < ConnectionName, tt_us_dma >
    < Connections, 64 >
    < DataBaseCharacterSet, AL32UTF8 >
    < DataStore, e:\andrew\datacache\usDMA >
    < DurableCommits, 0 >
    < GroupRestrict, <NULL> >
    < LockLevel, 0 >
    < LockWait, 10 >
    < LogBuffSize, 65536 >
    < LogDir, e:\andrew\datacache\ >
    < LogFileSize, 64 >
    < LogFlushMethod, 1 >
    < LogPurge, 0 >
    < Logging, 1 >
    < MemoryLock, 0 >
    < NLS_LENGTH_SEMANTICS, BYTE >
    < NLS_NCHAR_CONV_EXCP, 0 >
    < NLS_SORT, BINARY >
    < OracleID, NYCATP1 >
    < PassThrough, 0 >
    < PermSize, 4000 >
    < PermWarnThreshold, 90 >
    < PrivateCommands, 0 >
    < Preallocate, 0 >
    < QueryThreshold, 0 >
    < RACCallback, 0 >
    < SQLQueryTimeout, 0 >
    < TempSize, 514 >
    < TempWarnThreshold, 90 >
    < Temporary, 1 >
    < TransparentLoad, 0 >
    < TypeMode, 0 >
    < UID, OS_OWNER >
    ORACLE:
    Hardware: Sunos 5.10; 24x1.8Ghz (unsure of type); 82 GB RAM
    Version 10g Enterprise Edition Release 10.2.0.3.0 - 64bit Production
    Schema:
    CREATE MATERIALIZED VIEW OS_OWNER.MV_US_DATAMART
    TABLESPACE TS_OS
    PARTITION BY RANGE (ORDER_DATE)
    (
    PARTITION MV_US_DATAMART_MINVAL VALUES LESS THAN (TO_DATE(' 2007-11-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D1 VALUES LESS THAN (TO_DATE(' 2007-11-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D2 VALUES LESS THAN (TO_DATE(' 2007-11-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_NOV_D3 VALUES LESS THAN (TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D1 VALUES LESS THAN (TO_DATE(' 2007-12-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D2 VALUES LESS THAN (TO_DATE(' 2007-12-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_07_DEC_D3 VALUES LESS THAN (TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D1 VALUES LESS THAN (TO_DATE(' 2008-01-11 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D2 VALUES LESS THAN (TO_DATE(' 2008-01-21 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_08_JAN_D3 VALUES LESS THAN (TO_DATE(' 2008-02-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN'))
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS,
    PARTITION MV_US_DATAMART_MAXVAL VALUES LESS THAN (MAXVALUE)
    LOGGING
    NOCOMPRESS
    TABLESPACE TS_OS
    )
    NOCACHE
    NOCOMPRESS
    NOPARALLEL
    BUILD DEFERRED
    USING INDEX
    TABLESPACE TS_OS_INDEX
    REFRESH FAST ON DEMAND
    WITH PRIMARY KEY
    ENABLE QUERY REWRITE
    AS
    SELECT order_date, if_system,
    GROUPING_ID (order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id
    ) GROUPING_ID,
    /* ============ DIMENSIONS ============ */
    time_dim_id, business_dim_id, account_dim_id, ordertype_dim_id,
    instr_dim_id, execution_dim_id, exec_exchange_dim_id,
    /* ============ MEASURES ============ */
    -- o.FX_RATE /* FX_RATE */,
    COUNT (*) no_orders,
    -- SUM(NO_ORDERS) NO_ORDERS,
    -- COUNT(NO_ORDERS) CNT_NO_ORDERS,
    SUM (filled_quantity) filled_quantity,
    COUNT (filled_quantity) cnt_filled_quantity, SUM (quantity) quantity,
    COUNT (quantity) cnt_quantity, SUM (commission) commission,
    COUNT (commission) cnt_commission, SUM (fills_number) fills_number,
    COUNT (fills_number) cnt_fills_number,
    SUM (aggressive_fills) aggressive_fills,
    COUNT (aggressive_fills) cnt_aggressive_fills,
    SUM (fx_rate * filled_quantity * average_price) notional,
    COUNT (fx_rate * filled_quantity * average_price) cnt_notional,
    SUM (fx_rate * fills_number * average_price) total_price,
    COUNT (fx_rate * fills_number * average_price) cnt_total_price,
    SUM (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END) cancelled_orders_count,
    COUNT (CASE
    WHEN order_status = 'C'
    THEN 1
    ELSE 0
    END
    ) cnt_cancelled_orders_count,
    -- SUM(t.FX_RATE*t.NO_FILLS*t.AVG_PRICE) AVERAGE_PRICE,
    -- SUM(FILLS_NUMBER*AVERAGE_PRICE) STAGING_AVERAGE_PRICE,
    -- COUNT(FILLS_NUMBER*AVERAGE_PRICE) CNT_STAGING_AVERAGE_PRICE,
    SUM (routed_orders_no) routed_orders_no,
    COUNT (routed_orders_no) cnt_routed_orders_no,
    SUM (routed_liquidity_qty) routed_liquidity_qty,
    COUNT (routed_liquidity_qty) cnt_routed_liquidity_qty,
    SUM (removed_liquidity_qty) removed_liquidity_qty,
    COUNT (removed_liquidity_qty) cnt_removed_liquidity_qty,
    SUM (added_liquidity_qty) added_liquidity_qty,
    COUNT (added_liquidity_qty) cnt_added_liquidity_qty,
    SUM (agent_charges) agent_charges,
    COUNT (agent_charges) cnt_agent_charges,
    SUM (clearing_charges) clearing_charges,
    COUNT (clearing_charges) cnt_clearing_charges,
    SUM (execution_charges) execution_charges,
    COUNT (execution_charges) cnt_execution_charges,
    SUM (transaction_charges) transaction_charges,
    COUNT (transaction_charges) cnt_transaction_charges,
    SUM (order_management) order_management,
    COUNT (order_management) cnt_order_management,
    SUM (settlement_charges) settlement_charges,
    COUNT (settlement_charges) cnt_settlement_charges,
    SUM (recovered_agent) recovered_agent,
    COUNT (recovered_agent) cnt_recovered_agent,
    SUM (recovered_clearing) recovered_clearing,
    COUNT (recovered_clearing) cnt_recovered_clearing,
    SUM (recovered_execution) recovered_execution,
    COUNT (recovered_execution) cnt_recovered_execution,
    SUM (recovered_transaction) recovered_transaction,
    COUNT (recovered_transaction) cnt_recovered_transaction,
    SUM (recovered_ord_mgt) recovered_ord_mgt,
    COUNT (recovered_ord_mgt) cnt_recovered_ord_mgt,
    SUM (recovered_settlement) recovered_settlement,
    COUNT (recovered_settlement) cnt_recovered_settlement,
    SUM (client_agent) client_agent,
    COUNT (client_agent) cnt_client_agent,
    SUM (client_order_mgt) client_order_mgt,
    COUNT (client_order_mgt) cnt_client_order_mgt,
    SUM (client_exec) client_exec, COUNT (client_exec) cnt_client_exec,
    SUM (client_trans) client_trans,
    COUNT (client_trans) cnt_client_trans,
    SUM (client_clearing) client_clearing,
    COUNT (client_clearing) cnt_client_clearing,
    SUM (client_settle) client_settle,
    COUNT (client_settle) cnt_client_settle,
    SUM (chargeable_taxes) chargeable_taxes,
    COUNT (chargeable_taxes) cnt_chargeable_taxes,
    SUM (vendor_charge) vendor_charge,
    COUNT (vendor_charge) cnt_vendor_charge,
    SUM (routing_charges) routing_charges,
    COUNT (routing_charges) cnt_routing_charges,
    SUM (recovered_routing) recovered_routing,
    COUNT (recovered_routing) cnt_recovered_routing,
    SUM (client_routing) client_routing,
    COUNT (client_routing) cnt_client_routing,
    SUM (ticket_charges) ticket_charges,
    COUNT (ticket_charges) cnt_ticket_charges,
    SUM (recovered_ticket_charges) recovered_ticket_charges,
    COUNT (recovered_ticket_charges) cnt_recovered_ticket_charges
    FROM us_datamart_raw
    GROUP BY order_date,
    if_system,
    business_dim_id,
    time_dim_id,
    account_dim_id,
    ordertype_dim_id,
    instr_dim_id,
    execution_dim_id,
    exec_exchange_dim_id;
    -- Note: Index I_SNAP$_MV_US_DATAMART will be created automatically
    -- by Oracle with the associated materialized view.
    CREATE UNIQUE INDEX OS_OWNER.MV_US_DATAMART_UDX ON OS_OWNER.MV_US_DATAMART
    (ORDER_DATE, TIME_DIM_ID, BUSINESS_DIM_ID, ACCOUNT_DIM_ID, ORDERTYPE_DIM_ID,
    INSTR_DIM_ID, EXECUTION_DIM_ID, EXEC_EXCHANGE_DIM_ID)
    NOLOGGING
    NOPARALLEL
    COMPRESS 7;
    No of rows: 2228558
    The query (taken from Mondrian) that I run against each of them is:
    select sum("MV_US_DATAMART"."NOTIONAL") as "m0"
    --, sum("MV_US_DATAMART"."FILLED_QUANTITY") as "m1"
    --, sum("MV_US_DATAMART"."AGENT_CHARGES") as "m2"
    --, sum("MV_US_DATAMART"."CLEARING_CHARGES") as "m3"
    --, sum("MV_US_DATAMART"."EXECUTION_CHARGES") as "m4"
    --, sum("MV_US_DATAMART"."TRANSACTION_CHARGES") as "m5"
    --, sum("MV_US_DATAMART"."ROUTING_CHARGES") as "m6"
    --, sum("MV_US_DATAMART"."ORDER_MANAGEMENT") as "m7"
    --, sum("MV_US_DATAMART"."SETTLEMENT_CHARGES") as "m8"
    --, sum("MV_US_DATAMART"."COMMISSION") as "m9"
    --, sum("MV_US_DATAMART"."RECOVERED_AGENT") as "m10"
    --, sum("MV_US_DATAMART"."RECOVERED_CLEARING") as "m11"
    --,sum("MV_US_DATAMART"."RECOVERED_EXECUTION") as "m12"
    --,sum("MV_US_DATAMART"."RECOVERED_TRANSACTION") as "m13"
    --, sum("MV_US_DATAMART"."RECOVERED_ROUTING") as "m14"
    --, sum("MV_US_DATAMART"."RECOVERED_ORD_MGT") as "m15"
    --, sum("MV_US_DATAMART"."RECOVERED_SETTLEMENT") as "m16"
    --, sum("MV_US_DATAMART"."RECOVERED_TICKET_CHARGES") as "m17"
    --,sum("MV_US_DATAMART"."TICKET_CHARGES") as "m18"
    --, sum("MV_US_DATAMART"."VENDOR_CHARGE") as "m19"
              from "OS_OWNER"."MV_US_DATAMART" "MV_US_DATAMART"
    where I uncomment one column at a time and rerun. I have improved the TimesTen results since my first post by retyping the NUMBER columns to BINARY_FLOAT. The results I got were:
    No. of columns     Oracle     TimesTen
    1     1.05     0.94     
    2     1.07     1.47     
    3     2.04     1.8     
    4     2.06     2.08     
    5     2.09     2.4     
    6     3.01     2.67     
    7     4.02     3.06     
    8     4.03     3.37     
    9     4.04     3.62     
    10     4.06     4.02     
    11     4.08     4.31     
    12     4.09     4.61     
    13     5.01     4.76     
    14     5.02     5.06     
    15     5.04     5.25     
    16     5.05     5.48     
    17     5.08     5.84     
    18     6     6.21     
    19     6.02     6.34     
    20     6.04     6.75

  • Count(*) for a select stmt takes more time than executing that sql stmt

    Hi,
    count(*) for a select stmt takes more time than executing that sql stmt.
    Executing the particular select stmt takes 2.47 mins, and that select stmt uses the /*+parallel*/ hint (sql optimizer) for faster execution.
    But if I try to find out the total number of rows in that query, it takes more time ..
    almost 2.30 hrs and still running to find count(col).
    Please help me to get the count of rows faster.
    Thanks in advance...

    797525 wrote:
    HI
    count (*) for select stmt take more time than execute a that sql stmt
    executing particular select stmt take 2.47 mins but select stmt is using the /*+parallel*/ (sql optimer) in that sql  command for faster execute .
    but if i tried to find out total number of rows in that query it takes more time ..
    almost 2.30 hrs still running to find count(col)
    please help me to get count of row faster.
    thanks in advance...
    That may be because your client is displaying only the first few records when you are running the "SELECT *". But when you run "COUNT(*)", all the records have to be counted.
    As already mentioned, please read the FAQ on posting tuning questions.
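    For what it is worth, a sketch of carrying the same parallel hint into the count (the table name, alias, and degree here are placeholders, not taken from the original post):
    SELECT /*+ PARALLEL(t, 8) */ COUNT(*)
    FROM   your_big_table t;
    Whether this actually helps still depends on the available parallel servers and on how much of the table has to be scanned.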

  • Why is this query taking much longer than expected?

    Hi,
    I need expert support on the issue mentioned below:
    Why is this query taking much longer than expected? Sometimes I am getting a connection timeout error. Is there any better way to achieve the result in the shortest time? Below, please find the DDL & DML:
    DDL
    BHDCollections
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BHDCollections](
     [BHDCollectionid] [bigint] IDENTITY(1,1) NOT NULL,
     [GroupMemberid] [int] NOT NULL,
     [BHDDate] [datetime] NOT NULL,
     [BHDShift] [varchar](10) NULL,
     [SlipValue] [decimal](18, 3) NOT NULL,
     [ProcessedValue] [decimal](18, 3) NOT NULL,
     [BHDRemarks] [varchar](500) NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_BHDCollections] PRIMARY KEY CLUSTERED
     (
     [BHDCollectionid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    BHDCollectionsDet
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[BHDCollectionsDet](
     [CollectionDetailid] [bigint] IDENTITY(1,1) NOT NULL,
     [BHDCollectionid] [bigint] NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](18, 3) NOT NULL,
     [Quantity] [int] NOT NULL,
     CONSTRAINT [PK_BHDCollectionsDet] PRIMARY KEY CLUSTERED
     (
     [CollectionDetailid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    Banks
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Banks](
     [Bankid] [int] IDENTITY(1,1) NOT NULL,
     [Bankname] [varchar](50) NOT NULL,
     [Bankabbr] [varchar](50) NULL,
     [BankContact] [varchar](50) NULL,
     [BankTel] [varchar](25) NULL,
     [BankFax] [varchar](25) NULL,
     [BankEmail] [varchar](50) NULL,
     [BankActive] [bit] NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_Banks] PRIMARY KEY CLUSTERED
     (
     [Bankid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    Groupmembers
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[GroupMembers](
     [GroupMemberid] [int] IDENTITY(1,1) NOT NULL,
     [Groupid] [int] NOT NULL,
     [BAID] [int] NOT NULL,
     [Createdby] [varchar](50) NULL,
     [Createdon] [datetime] NULL,
     CONSTRAINT [PK_GroupMembers] PRIMARY KEY CLUSTERED
     (
     [GroupMemberid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_BankAccounts] FOREIGN KEY([BAID])
    REFERENCES [dbo].[BankAccounts] ([BAID])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_BankAccounts]
    GO
    ALTER TABLE [dbo].[GroupMembers]  WITH CHECK ADD  CONSTRAINT [FK_GroupMembers_Groups] FOREIGN KEY([Groupid])
    REFERENCES [dbo].[Groups] ([Groupid])
    GO
    ALTER TABLE [dbo].[GroupMembers] CHECK CONSTRAINT [FK_GroupMembers_Groups]
    BankAccounts
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[BankAccounts](
     [BAID] [int] IDENTITY(1,1) NOT NULL,
     [CustomerID] [int] NOT NULL,
     [Locationid] [varchar](25) NOT NULL,
     [Bankid] [int] NOT NULL,
     [BankAccountNo] [varchar](50) NOT NULL,
     CONSTRAINT [PK_BankAccounts] PRIMARY KEY CLUSTERED
     (
     [BAID] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Banks] FOREIGN KEY([Bankid])
    REFERENCES [dbo].[Banks] ([Bankid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Banks]
    GO
    ALTER TABLE [dbo].[BankAccounts]  WITH CHECK ADD  CONSTRAINT [FK_BankAccounts_Locations1] FOREIGN KEY([Locationid])
    REFERENCES [dbo].[Locations] ([Locationid])
    GO
    ALTER TABLE [dbo].[BankAccounts] CHECK CONSTRAINT [FK_BankAccounts_Locations1]
    Currency
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[Currency](
     [Currencyid] [int] IDENTITY(1,1) NOT NULL,
     [CurrencyISOCode] [varchar](20) NOT NULL,
     [CurrencyCountry] [varchar](50) NULL,
     [Currency] [varchar](50) NULL,
     CONSTRAINT [PK_Currency] PRIMARY KEY CLUSTERED
     (
     [Currencyid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    CurrencyDetails
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    SET ANSI_PADDING ON
    GO
    CREATE TABLE [dbo].[CurrencyDetails](
     [CurDenid] [int] IDENTITY(1,1) NOT NULL,
     [Currencyid] [int] NOT NULL,
     [Denomination] [decimal](15, 3) NOT NULL,
     [DenominationType] [varchar](25) NOT NULL,
     CONSTRAINT [PK_CurrencyDetails] PRIMARY KEY CLUSTERED
     (
     [CurDenid] ASC
    )WITH (PAD_INDEX  = OFF, STATISTICS_NORECOMPUTE  = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS  = ON, ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
    ) ON [PRIMARY]
    GO
    SET ANSI_PADDING OFF
    QUERY
    WITH TEMP_TABLE AS
    (
    SELECT     0 AS COINS, BHDCollectionsDet.Quantity AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'Currency') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)
    UNION ALL
    SELECT     BHDCollectionsDet.Quantity AS COINS, 0 AS BN, BHDCollections.BHDDate AS CollectionDate, BHDCollectionsDet.Currencyid,
                          (BHDCollections.BHDCollectionid) AS DSLIPS, Banks.Bankname
    FROM         BHDCollections INNER JOIN
                          BHDCollectionsDet ON BHDCollections.BHDCollectionid = BHDCollectionsDet.BHDCollectionid INNER JOIN
                          GroupMembers ON BHDCollections.GroupMemberid = GroupMembers.GroupMemberid INNER JOIN
                          BankAccounts ON GroupMembers.BAID = BankAccounts.BAID INNER JOIN
                          Currency ON BHDCollectionsDet.Currencyid = Currency.Currencyid INNER JOIN
                          CurrencyDetails ON Currency.Currencyid = CurrencyDetails.Currencyid INNER JOIN
                          Banks ON BankAccounts.Bankid = Banks.Bankid
    GROUP BY BHDCollectionsDet.Quantity, BHDCollections.BHDDate, BankAccounts.Bankid, BHDCollectionsDet.Currencyid, CurrencyDetails.DenominationType,
                          CurrencyDetails.Denomination, BHDCollectionsDet.Denomination, Banks.Bankname,BHDCollections.BHDCollectionid
    HAVING      (BHDCollections.BHDDate BETWEEN @FromDate AND @ToDate) AND (BankAccounts.Bankid = @Bankid) AND (CurrencyDetails.DenominationType = 'COIN') AND
                          (CurrencyDetails.Denomination = BHDCollectionsDet.Denomination)),
    TEMP_TABLE2 AS
    (
    SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM TEMP_TABLE Group By CollectionDate,DSLIPS,Bankname
    )
    SELECT CollectionDate,Bankname,count(DSLIPS) AS DSLIPS,sum(BN) AS BN,sum(COINS) AS coins FROM TEMP_TABLE2 Group By CollectionDate,Bankname
    HAVING COUNT(DSLIPS)<>0;

    Without seeing an execution plan for the query it is hard to suggest anything useful. Try inserting the result of the UNION ALL into a temporary table and then performing the aggregation on that table, not on a CTE.
    Just
    SELECT CollectionDate,Bankname,DSLIPS AS DSLIPS,SUM(BN) AS BN,SUM(COINS)AS COINS  FROM
    #tmp Group By CollectionDate,DSLIPS,Bankname
    HAVING COUNT(DSLIPS)<>0;
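    For illustration, a minimal sketch of that temporary-table approach; the #Collections name and column types are assumptions based on the DDL above, and the INSERT is left as a comment because it is simply the UNION ALL query already shown:
    CREATE TABLE #Collections
    (
        CollectionDate datetime,
        Bankname       varchar(50),
        DSLIPS         bigint,
        BN             int,
        COINS          int
    );
    -- INSERT INTO #Collections (COINS, BN, CollectionDate, DSLIPS, Bankname)
    -- SELECT COINS, BN, CollectionDate, DSLIPS, Bankname FROM ( ...the UNION ALL above... ) AS u;
    SELECT   CollectionDate, Bankname,
             COUNT(DSLIPS) AS DSLIPS,
             SUM(BN)       AS BN,
             SUM(COINS)    AS COINS
    FROM     (SELECT CollectionDate, Bankname, DSLIPS,
                     SUM(BN)    AS BN,
                     SUM(COINS) AS COINS
              FROM   #Collections
              GROUP BY CollectionDate, DSLIPS, Bankname) AS t
    GROUP BY CollectionDate, Bankname
    HAVING   COUNT(DSLIPS) <> 0;
    DROP TABLE #Collections;
    Materializing the UNION ALL once avoids re-deriving it for each aggregation level and lets you check the row count and plan of each step separately.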
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Can an exe run multiple times by clicking its icon, just like a VB program?

    Can an exe made by LabVIEW run multiple instances, just like a VB program?
    I have completed a program that displays the state of my machine through the serial port. The program was designed to show only one machine's state, and if more than one machine's state needs to be displayed, I hope the program can run in duplicate or triplicate. Can this be realized?
    Can a dynamically called subvi (set as a reentrant vi) solve my problem?
    When I try to run the same subvi with an invoke node (Run VI method), "Error 1000 occurred at Invoke Node" appears.
    If the dynamically called subvi can do this, do the subvis of that subvi need to be set as reentrant too? The subvis of the subvi don't need to share information between different machines.

    The 'subsubVIs' may or may not be a problem. It depends greatly on what they are and what they do. In some cases, making them re-entrant will help. You should read the shipping document called Using LabVIEW to Create Multithreaded VIs for Maximum Performance and Reliability. There is a section in there on re-entrant VIs. There also have been numerous threads in this forum on the subject.

  • My Library has ten more songs than my iPod. How can I find out which songs are not syncing?

    My Library has ten more songs than my iPod. How can I find out which songs are not syncing?
    I recently had to purchase a new hard drive. Because the size of my music collection exceeds 140 GB, I stored all of my music on a separate external hard drive. When I exported everything back into my new iTunes library, I found that I had about 200 more songs than I had in the previous library. After deleting most of those extraneous songs, I was able to get my library close to where it was before. However, after syncing to my iPod Classic, I noticed that only 34,853 of the 34,863 loaded. How can I figure out which of those ten songs in my library did not sync to my iPod?
    Yellow12

    It may be that the 10 songs are unchecked. To find them, select Songs view.
    If there is a column heading marked with a tick, click on it to sort by that column.
    If the column is not there, go to View > View Options and select the Checked column, which will then be displayed.
    Check the songs by putting ticks in the empty boxes and resync. Those tracks should then go over as well.

  • How to add extra options for the style of images? The iPad version has far more options than the Mac version

    How do I add extra options for the style of images in Pages? The iPad version has far more options than the Mac version.

    Thank you, you made me go look again and I realised the ones I use on the iPad are under Borders, not Style, and they are on the Mac also. I feel a bit silly, but I wouldn't have got there without your prompting. Do you mean by "create your own" using the borders and then saving as a style, or can you create something that is not in either?
    Thanks again

  • Error in SQL query: "The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator" for the query

    Hi Experts,
    While running a SQL query I am getting the error
    "The text, ntext, and image data types cannot be compared or sorted, except when using IS NULL or LIKE operator." for the query
    select  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price ,
    T2.LineText
    from OQUT T0  INNER JOIN QUT1 T1 ON T0.DocEntry = T1.DocEntry INNER JOIN
    QUT10 T2 ON T1.DocEntry = T2.DocEntry where T1.DocEntry='590'
    group by  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price
    ,T2.LineText
    How do I resolve this issue?

    Dear Meghanath,
    Please use the following query; I hope it will serve your purpose.
    select  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price ,
    CAST(T2.LineText as nvarchar (MAX))[LineText]
    from OQUT T0  INNER JOIN QUT1 T1 ON T0.DocEntry = T1.DocEntry LEFT OUTER JOIN
    QUT10 T2 ON T1.DocEntry = T2.DocEntry --where T1.DocEntry='590'
    group by  T1. Dscription,T1.docEntry,T1.Quantity,T1.Price
    ,CAST(T2.LineText as nvarchar (MAX))
    Regards,
    Amit

  • ERROR: Updatable SQL Query already exists on page 20.

    Hello,
    I created a tabular form, then added a where clause and started getting the following error:
    Error in mru internal routine: ORA-20001: no data found in tabular form
    So I deleted all components of the tabular form (region, buttons, branches and process) and started building it again from scratch. This is a strategy that has worked for me a number of times in HTML DB. Unfortunately now I get the following error when trying to build a new tabular form:
    Updatable SQL Query already exists on page 20. You can only add one updatable SQL query per page. Select a different page.
    What is the logic that throws this error? I'm 99% convinced I've cleared the page of any updatable SQL; is there something at the application level I should also clear out?
    Thanks!

    Ignore this, problem solved. It seems I do have SQL reports that are marked as updatable even though they are just plain reports.
    Sorry.

  • Error in SQL query - SQL command not properly ended

    Hi All
    I have this SQL query that returns the following error when I run it in TOAD:
    SELECT
    VC.CAMPAIGN_NUMBER,VC.CAMPAIGN_TITLE,VC.CAMPAIGN_DESC,
    VCV.START_DATE, VCV.END_DATE,VC.CAMPAIGN_TYPE,VC.APPLICABILITY,
    VC.CAMPAIGN_PRIORITY
         FROM
         VM_CAMPAIGN_VIN VCV ,VM_CAMPAIGN VC
         WHERE
              VCV.VIN = 'US'
              AND
    VCV.CAMPAIGN_NUMBER = VC.CAMPAIGN_NUMBER AND VC.COUNTRY_CODE = 'E'
              AND VC.LANGUAGE_CODE = 'L' AND VC.CAMPAIGN_TYPE = null
    AND SYSDATE BETWEEN VCV.START_DATE AND VCV.END_DATE
    AND SYSDATE BETWEEN VC.START_DATE AND VC.END_DATE)
    A ORDER BY A.CAMPAIGN_PRIORITY DESC, A.END_DATE
    The error is:
    SQL command not properly ended
    Any help is highly appreciated. Thanks

    Thanks a lot to everyone. It helped me run the query without any problem. Now I have another issue. This may not be the right place to post this question, I think. My apologies for that. The problem is, WebLogic reports an error as follows:
    java.sql.SQLException: ORA-00923: FROM keyword not found where expected
    I am wondering if this is a Java-related error or an SQL-related error.
    Well, I am using the SQL statement which you helped debug, and it is inside something called "XXSQLConstants.java", consisting of the following SQL statement:
    public static final String XX_ALL_CAMPS_SELECT = "SELECT * FROM" +
         "(SELECT vc.campaign_number, vc.campaign_title, vc.campaign_desc, vc.start_date," +
         "vc.end_date, vc.campaign_type, vc.applicability, vc.campaign_priority" +
    "FROM vm_campaign vc" +
    "WHERE vc.applicability = 'Y'" +
    "AND vc.country_code = ? " +
    "AND vc.language_code = ? " +
    "AND vc.campaign_type = ? "+
    "AND SYSDATE BETWEEN vc.start_date AND vc.end_date" +
    "AND NOT EXISTS (" +
    "SELECT 'X'" +
    "FROM vm_campaign_vin vcv" +
    "WHERE (vcv.vin = ? " +
    "AND vcv.campaign_number = vc.campaign_number" +
    "))" +
         "UNION" +
         "SELECT vc.campaign_number, vc.campaign_title, vc.campaign_desc," +
         "vcv.start_date, vcv.end_date, vc.campaign_type, vc.applicability," +
    "vc.campaign_priority" +
    "FROM vm_campaign_vin vcv, vm_campaign vc" +
    "WHERE vcv.vin = ? " +
    "AND vcv.campaign_number = vc.campaign_number" +
    "AND vc.country_code = ? " +
    "AND vc.language_code = ? " +
    "AND vc.campaign_type IS NULL" +
    "AND SYSDATE BETWEEN vcv.start_date AND vcv.end_date" +
    "AND SYSDATE BETWEEN vc.start_date AND vc.end_date)";
    The SQL runs fine when tested (well, it does not return any data for the rows returned, but there are no errors), but in my application server I get the following error (pointing to the same SQL code pasted above):
    java.sql.SQLException: ORA-00923: FROM keyword not found where expected
    Any suggestions? Thanks in advance. I appreciate all replies.

  • JSTL sql:query date field has zero time part

    I have a little JSP where I am using JSTL to do a query against a table "ph_application" that has a date field "rundate". This field has many different dates and times in it.
    Here is the code snippet:
    <sql:setDataSource url="jdbc:oracle:thin:@oraprd02:1521:ipp4" user="site" password="pmc_site"/>
    <sql:query var="application" sql="select * from ph_application"/>
    <c:forEach items="${application.rows}" var="row">
    <tr>
    <td><c:out value="${row.name}" default=" " escapeXml="false"/></td>
    <td><fmt:formatDate value="${row.rundate}" type="both"/></td>
    </tr>
    </c:forEach>
    The resulting date/time output always has a 12:00:00 AM time regardless of what is actually in the table. I have changed the fmt:formatDate to a c:out of the row.rundate.time value, and it appears that the time part of the date is actually zero at this point.
    Is there an inconsistency between the Oracle date type and what JSTL is expecting (somewhere)?
    Any help and/or verification is appreciated.
    Richard

    I'm running this from within JDeveloper 10g.
