Informix / SQL Query to calculate Service Level Met %

Hi,
We have a client running UCCX 7.0 and we are upgrading it to 8.5.
They have a custom .NET application that displays different stats on the IP Phone. I have converted all of them except that I am having a problem calculating Service Level Met %.
The existing SQL query converts a boolean into a float, but Informix gives an error that a boolean can't be converted to float or integer.
Here is the part of the existing query (SQL Server) that calculates the service level met %:
ROUND((SUM(CAST(ssl.metServiceLevel AS FLOAT))/COUNT(CAST(ssl.metServiceLevel AS FLOAT))) * 100,2) AS serviceLevelMet
I would greatly appreciate it if anyone could help convert this query into Informix format.
Thank you

Hi Sonain
Try this; it won't display a service level until you have received at least one call on the CSQ.
SELECT csqsum.CSQName,loggedInAgents,talkingAgents,availableAgents,workingAgents,reservedAgents,servicelevel
FROM  RtCSQsSummary csqSum left outer join
     (SELECT csqname, ROUND(sum(100 * (MetSL)/ (case when (MetSL + NotSL  = 0) then 1 else (MetSL + NotSL) end)) ) as servicelevel
        FROM(
            SELECT
                 trim( csqname) csqname
                ,sum( case cqd.metServiceLevel when 't' then  1 else 0 end)  MetSL
                ,sum( case cqd.metServiceLevel when 't' then 0 else 1 end) NotSL
            FROM ContactServiceQueue csq
                inner join contactqueuedetail cqd  on cqd.targetID = csq.recordID
                inner join contactroutingdetail crd on crd.sessionID = cqd.sessionID
                inner join contactcalldetail ccd on crd.sessionID = ccd.sessionID
            WHERE Date(current) - Date(crd.StartDateTime ) = 0
            GROUP BY csqname
        )z
   GROUP BY z.csqname
    )y
ON y.CSQName = csqsum.CSQName
Please rate helpful posts.
Graham
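Distilling Graham's approach back to the single expression from the original SQL Server query, a minimal Informix-style sketch (assuming metServiceLevel is compared against 't', as in the query above) would be:
-- Informix cannot CAST a BOOLEAN to FLOAT, so map it to 1.0/0.0 with CASE first
ROUND(
    (SUM(CASE WHEN ssl.metServiceLevel = 't' THEN 1.0 ELSE 0.0 END)
     / COUNT(ssl.metServiceLevel)) * 100, 2) AS serviceLevelMet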

Similar Messages

  • SQL Query to Calculate Total PO Amount

    Hi,
    I'm trying to create a Purchase Order view that includes Total PO Amount. Has anyone done this before and can provide me with the SQL, or advise on how to write the query to calculate the po_line amounts for each PO number?
    I've looked in the POXPOEPO.fmb form for the calculation (code) and looked at several APPS views (e.g. PO_Headers_Inq_V) without any success.
    Any help would be appreciated.
    Thanks
    Tony

    Tony, I think this will help you:
    select sum(pl.unit_price * pl.quantity)
    from po_lines_all pl
    , po_headers_all ph
    where ph.segment1 = 1004257 -- po_number
    and ph.po_header_id = pl.po_header_id
    Regards,
    Ricardo Cabral
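    If the goal is a view with the total per PO rather than a single PO, a sketch along the same lines (same tables and join as above; verify against your APPS instance that segment1 is the PO number column you want):
    -- total amount per purchase order, grouped by PO number
    select ph.segment1                      as po_number,
           sum(pl.unit_price * pl.quantity) as total_po_amount
    from   po_lines_all pl,
           po_headers_all ph
    where  ph.po_header_id = pl.po_header_id
    group  by ph.segment1;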

  • How to write a SQL query to calculate weights using a CTE

    Hi guys,
    I want some help using a CTE to generate data using recursive SQL. The input data in Table A is to be transformed into Table B, shown below.
    Table A
    Instru_id_index   instru_id_name   instru_id_constit   con_name   weight
    56                INDEX A          23                  A          25
    56                INDEX A          24                  B          25
    56                INDEX A          25                  C          25
    56                INDEX A          57                  INDEX B    25
    57                INDEX B          31                  D          33
    57                INDEX B          32                  E          33
    57                INDEX B          33                  F          33
    (Logic should be recursive in order to be able to handle multi-level, not just level 2.)
    Table B
    Instru_id_index   instru_id_name   instru_id_constit   constit_name   weight
    56                INDEX A          23                  A              25
    56                INDEX A          24                  B              25
    56                INDEX A          25                  C              25
    56                INDEX A          31                  D              8.3
    56                INDEX A          32                  E              8.3
    56                INDEX A          33                  F              8.3
    57                INDEX B          31                  D              33
    57                INDEX B          32                  E              33
    57                INDEX B          33                  F              33
    How can I write a simple CTE construct to display the data in Table B? How can I calculate the weight values of 8.3, without changing the structure of the tables?
    Can I do this in a CTE?

    Full join?
    Anyway, thanks to Rsignh for producing a script with CREATE TABLE and INSERT statements. I've extended the data to one more level of recursion.
    create table weight_tab(
     instrument_id_index int,
     instrument_id_name varchar(10),
     instrument_id_constituent int,
     constituent_name varchar(10),
     [weight] decimal(10,2))
     insert into weight_tab values
     (56,'INDEX A',23,'A',25),(56,'INDEX A', 24,'B',25),
    (56,'INDEX A',25,'C',25),(56,'INDEX A', 57,'INDEX  B',25),
    (57,'INDEX B',31,'D',33), (57,'INDEX B', 32,'INDEX E',33),
    (57,'INDEX B',33,'INDEX C',33),
    (33,'INDEX C',42,'Alfa',60),
    (33,'INDEX C',43,'Beta',40),
    (32,'INDEX C',142,'Gamma',90),
    (32,'INDEX C',143,'Delta',10)
    go
    SELECT * FROM weight_tab
    go
    ; WITH rekurs AS (
        SELECT instrument_id_index, instrument_id_name, instrument_id_constituent,
               cast(weight as float) AS weight, cnt = 1
        FROM   weight_tab a
        WHERE  NOT EXISTS (SELECT *
                           FROM   weight_tab b
                           WHERE  b.instrument_id_constituent = a.instrument_id_index)
        UNION ALL
        SELECT r.instrument_id_index, r.instrument_id_name, w.instrument_id_constituent,
               r.weight * w.weight / 100, r.cnt  + 1
        FROM   rekurs r
        JOIN   weight_tab w ON r.instrument_id_constituent = w.instrument_id_index
        WHERE r.cnt < 4
    )
    SELECT instrument_id_index, instrument_id_name, instrument_id_constituent,
           cast(weight AS decimal(10,2))
    FROM   rekurs
    go
    DROP TABLE weight_tab
    Erland Sommarskog, SQL Server MVP, [email protected]

  • SQL Query to calculate on-time dispatch with a calendar table

    Hi Guys,
    I have a query (view) to calculate orders' fulfillment lead times.
    The current criteria exclude weekends but not bank holidays, so I have created a calendar table with a column named
    isBusinessDay, but I don't know how best to use this table to calculate the On-Time metric. I have been looking everywhere, but so far I have been unable to solve my problem.
    Please find below the current calculation for the On-Time Column:
    SELECT
    Week#
    , ClntGroup
    , CORD_DocumentCode
    , DESP_DocumentCode
    , Cord_Lines --#lines ordered
    , CORD_Qty --total units orderd
    , DESP_Lines --#lines dispatched
    , DESP_Qty --total units dispatched
    , Status
    , d_status
    , OpenDate --order open date
    , DateDue
    , DESP_PostedDate --order dispatched date
    , DocType
    , [Lead times1]
    , [Lead times2]
    , InFxO
    , OnTime
    , InFxO + OnTime AS InFullAndOneTime
    , SLADue
    FROM (
    SELECT
    DATEPART(WEEK, d.DateOpn) AS Week#
    , Clients.CustCateg
    , Clients.ClntGroup
    , d.DocumentCode AS CORD_DocumentCode
    , CDSPDocs.DocumentCode AS DESP_DocumentCode
    , COUNT(CORDLines.Qnty) AS Cord_Lines
    , SUM(CORDLines.Qnty) AS CORD_Qty
    , COUNT(CDSPLines.Qnty) AS DESP_Lines
    , SUM(CDSPLines.Qnty) AS DESP_Qty
    , CDSPLines.Status
    , d.Status AS d_status
    , d.OpenDate
    , d.DateDue
    , CDSPDocs.PostDate AS DESP_PostedDate
    , d.DocType
    , DATEDIFF(DAY, d.OpenDate, d.DateDue) AS [Lead times1]
    , DATEDIFF(DAY, d.OpenDate, CDSPDocs.PostDate) AS [Lead times2]
    , CASE WHEN SUM(CORDLines.Qnty) = SUM(CDSPLines.Qnty) THEN 1 ELSE 0 END AS InFxO --in-full
    --On-Time by order according to Despatch SLAs
    , CASE
    WHEN Clients.ClntGroup IN ('Local Market', 'Web Sales', 'Mail Order')
    AND (DATEDIFF(DAY, d.OpenDate, CDSPDocs.PostDate) - (DATEDIFF(WEEK, d.OpenDate, CDSPDocs.PostDate) * 2 ) <= 2)
    THEN 1
    WHEN Clients.ClntGroup IN ('Export Market', 'Export Market - USA')
    AND (DATEDIFF(DAY, d.OpenDate, CDSPDocs.PostDate) - (DATEDIFF(WEEK, d.OpenDate, CDSPDocs.PostDate) * 2) <= 14)
    THEN 1
    WHEN Clients.ClntGroup = 'Export Market' OR Clients.CustCateg = 'UK Transfer'
    AND d.DateDue >= CDSPDocs.PostDate
    THEN 1
    ELSE 0
    END AS OnTime
    --SLA Due (as a control)
    , CASE
    WHEN Clients.ClntGroup IN ('Local Market', 'Web Sales','Mail Order') AND CDSPDocs.PostDate is Null
    THEN DATEADD(DAY, 2 , d.OpenDate)
    WHEN Clients.ClntGroup IN ('Export Market', 'Export Market - UK', 'Export Market - USA') OR (Clients.CustCateg = 'UK Transfer')
    AND CDSPDocs.PostDate IS NULL
    THEN DATEADD (DAY, 14 , d.OpenDate)
    ELSE CDSPDocs.PostDate
    END AS SLADue
    FROM dbo.Documents AS d
    INNER JOIN dbo.Clients
    ON d.ObjectID = dbo.Clients.ClntID
    AND Clients.ClientName NOT in ('Samples - Free / Give-aways')
    LEFT OUTER JOIN dbo.DocumentsLines AS CORDLines
    ON d.DocID = CORDLines.DocID
    AND CORDLines.TrnType = 'L'
    LEFT OUTER JOIN dbo.DocumentsLines AS CDSPLines
    ON CORDLines.TranID = CDSPLines.SourceID
    AND CDSPLines.TrnType = 'L'
    AND (CDSPLines.Status = 'Posted' OR CDSPLines.Status = 'Closed')
    LEFT OUTER JOIN dbo.Documents AS CDSPDocs
    ON CDSPLines.DocID = CDSPDocs.DocID
    LEFT OUTER JOIN DimDate
    ON dimdate.[Date] = d.OpenDate
    WHERE
    d.DocType IN ('CASW', 'CORD', 'MORD')
    AND CORDLines.LneType NOT IN ('Fght', 'MANF', 'Stor','PACK', 'EXPS')
    AND CORDLines.LneType IS NOT NULL
    AND d.DateDue <= CONVERT(date, GETDATE(), 101)
    GROUP BY
    d.DateOpn
    ,d.DocumentCode
    ,Clients.CustCateg
    ,CDSPDocs.DocumentCode
    ,d.Status
    ,d.DocType
    ,d.OpenDate
    ,d.DateReq
    ,CDSPDocs.PostDate
    ,CDSPLines.Status
    ,Clients.ClntGroup
    ,d.DocumentName
    ,d.DateDue
    ,d.DateOpn
    ) AS derived_table
    Please find below the DimDate table
    FullDateNZ HolidayNZ IsHolidayNZ IsBusinessDay
    24/12/2014 NULL 0 1
    25/12/2014 Christmas Day 1 0
    26/12/2014 Boxing Day 1 0
    27/12/2014 NULL 0 0
    28/12/2014 NULL 0 0
    29/12/2014 NULL 0 1
    30/12/2014 NULL 0 1
    31/12/2014 NULL 0 1
    1/01/2015 New Year's Day 1 0
    2/01/2015 Day after New Year's 1 0
    3/01/2015 NULL 0 0
    4/01/2015 NULL 0 0
    5/01/2015 NULL 0 1
    6/01/2015 NULL 0 1
    This is what I get from the query:
    Week# ClntGroup CORD_DocumentCode OpenDate DESP_PostedDate OnTime
    52 Web Sales 123456 24/12/2014 29/12/2014 0
    52 Web Sales 123457 24/12/2014 30/12/2014 0
    52 Web Sales 123458 24/12/2014 29/12/2014 0
    52 Local Market 123459 24/12/2014 29/12/2014 0
    1 Web Sale 123460 31/12/2014 5/01/2015 0
    1 Local Market 123461 31/12/2014 6/01/2015 0
    As the difference between the dispatched and open date is 2 business days or less, the result I expect is this:
    Week# ClntGroup CORD_DocumentCode OpenDate DESP_PostedDate OnTime
    52 Web Sales 123456 24/12/2014 29/12/2014 1
    52 Web Sales 123457 24/12/2014 30/12/2014 1
    52 Web Sales 123458 24/12/2014 29/12/2014 1
    52 Local Market 123459 24/12/2014 29/12/2014 1
    1 Web Sale 123460 31/12/2014 5/01/2015 1
    1 Local Market 123461 31/12/2014 6/01/2015 1
    I am using SQL Server 2012
    Thanks
    Eric

    >> The current criteria exclude week-ends but not bank holidays therefore I have created a calendar table with a column name “isBusinessDay” but I don't know how to best use this table to calculate the On-Time metric. <<
    The Julian business day is a good trick. Number the days from whenever your calendar starts and repeat a number for a weekend or company holiday.
    CREATE TABLE Calendar
    (cal__date date NOT NULL PRIMARY KEY, 
     julian_business_nbr INTEGER NOT NULL);
    INSERT INTO Calendar 
    VALUES ('2007-04-05', 42), 
     ('2007-04-06', 43), -- good Friday 
     ('2007-04-07', 43), 
     ('2007-04-08', 43), -- Easter Sunday 
     ('2007-04-09', 44), 
     ('2007-04-10', 45); --Tuesday
    To compute the business days from Thursday of this week to next Tuesday:
    SELECT (C2.julian_business_nbr - C1.julian_business_nbr)
     FROM Calendar AS C1, Calendar AS C2
     WHERE C1.cal__date = '2007-04-05'
     AND C2.cal__date = '2007-04-10'; 
    We do not use flags in SQL; that was assembly language. I see from your code that you are still in a 1960's mindset. You used camelCase for a column name! It makes the eyes jump and screws up maintaining code; read the literature. 
    The “#” is illegal in ANSI/ISO Standard SQL and most other ISO Standards. You are writing 1970's Sybase dialect SQL! The rest of the code is a mess. 
    The one column per line, flush left, leading comma layout tells me you used punch cards when you were learning programming; me too! We wrote that crap because a card had only 80 columns and an uppercase-only IBM 027 character set. STOP IT; it makes you
    look stupid in 2015.
    You should follow ISO-11179 rules for naming data elements. You failed. You believe that “status” is a precise, context independent data element name! NO! 
    You should follow ISO-8601 rules for displaying temporal data. But you used a horrible local dialect. WHY?? The “yyyy-mm-dd” format is the only one in ANSI/ISO Standard SQL, and it is one of the most common standards embedded in ISO standards. Then you used crap
    like “6/01/2015”, which is varying-length and ambiguous (it might be “2015-06-01” or “2015-01-06”), as well as not using dashes.
    We need to know the data types, keys and constraints on the table. Avoid dialect in favor of ANSI/ISO Standard SQL. 
    And you need to read and download the PDF for: 
    https://www.simple-talk.com/books/sql-books/119-sql-code-smells/
    >> Please find below the current calculation for the On-Time Column: <<
    “No matter how far you have gone down the wrong road, turn around”
     -- Turkish proverb
    Can you throw out this mess and start over? If you do not, then be ready for performance to go to hell, and for unmaintainable code that has to be kludged like you are doing now. In a good schema an OUTER JOIN is rare, but you have more of them in
    ONE statement than I have seen in entire major corporation databases.
    --CELKO-- Books in Celko Series for Morgan-Kaufmann Publishing: Analytics and OLAP in SQL / Data and Databases: Concepts in Practice Data / Measurements and Standards in SQL SQL for Smarties / SQL Programming Style / SQL Puzzles and Answers / Thinking
    in Sets / Trees and Hierarchies in SQL
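    Setting the style debate aside, one way to actually use the DimDate table from the question is to replace the weekend arithmetic in the OnTime CASE with a count of business days between the two dates. A minimal sketch, assuming the [Date] and IsBusinessDay columns shown in the sample above and keeping the original 2-day and 14-day SLAs:
    -- counts only rows flagged as business days between order open and dispatch
    CASE
        WHEN Clients.ClntGroup IN ('Local Market', 'Web Sales', 'Mail Order')
             AND (SELECT COUNT(*)
                  FROM DimDate dd
                  WHERE dd.[Date] > d.OpenDate
                    AND dd.[Date] <= CDSPDocs.PostDate
                    AND dd.IsBusinessDay = 1) <= 2 THEN 1
        WHEN Clients.ClntGroup IN ('Export Market', 'Export Market - USA')
             AND (SELECT COUNT(*)
                  FROM DimDate dd
                  WHERE dd.[Date] > d.OpenDate
                    AND dd.[Date] <= CDSPDocs.PostDate
                    AND dd.IsBusinessDay = 1) <= 14 THEN 1
        ELSE 0
    END AS OnTime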

  • SQL query to calculate total credit having more than one table

    I have these 9 tables and want to calculate the total credit from them. I also want to calculate the total debit for all account numbers, where each account number shows its own total credit and total debit:
    parties
    purchase 
    purchase body
    purchase return
    purchase return body
    sale 
    sale body
    sale return
    sale return body

    If you want us to suggest an accurate solution, please post CREATE TABLE + INSERT INTO sample data + your DESIRED RESULT.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • SQL Query to Calculate No of Days from LastDate and PreviousDate

    Hi All,
    Below is a sample table which holds the rent updates done for properties at any point of time.
    I would like the query to show the output as:
    Prop_Code   PreviousRent   PreviousRentUpdateDate   LastRent   LastRentUpdateDate   NoofDays
    1008        206.38         2014-06-16               209.04     2014-12-22           189
    DECLARE @table TABLE
     ( Prop_Code INT
     ,Current_Rent INT
     ,Revised_Rent INT
     ,Rent_Review_Date varchar(10)
     ,Rent_Review_Time DATEtime)
    INSERT INTO @table (PROP_CODE,Current_Rent,Revised_Rent,Rent_Review_Date,Rent_Review_Time) VALUES
    (2977,372,339.15,'2013-07-08','7:44')
    ,(2977,372,339.15,'2013-07-03','11:01')
    ,(2977,372,372,'2014-06-30','9:07')
    ,(2977,372,372,'2014-07-07','11:06')
    ,(2981,372,372,'2014-07-07','11:06')
    ,(2981,372,340.15,'2013-07-08','7:23')
    ,(2981,372,314.15,'2013-07-08','7:44')
    ,(2981,372,340.15,'2013-07-29','7:16')
    ,(3089,205.63,400,'2014-10-27','8:38')
    ,(3089,205.63,205.63,'2014-02-03','8:29')
    ,(3089,205.63,127.64,'2014-01-20','0:52')
    ,(3089,205.63,123.02,'2013-08-12','8:28')
    ,(3089,205.63,205.63,'2014-12-15','8:46')
    ,(3109,252.62,198,'2014-01-20','0:52')
    ,(3109,252.62,252.62,'2014-04-07','8:30')
    ,(3109,252.62,198,'2013-08-12','8:28')
    ,(3117,284.96,336,'2014-04-21','1:03')
    ,(3125,267.53,267.53,'2014-02-03','8:29')
    ,1008,       181.32, '2013-03-19, '04:41')
    ,(1008 ,      186.15, '2013-03-19, '04:41')
    ,(1008 ,      187.62, '2013-03-19, '04:41')
    ,(1008,       191.07, '2013-03-19, '04:41')
    ,(1008,      202.33, '2013-08-12', '08:28')
    ,(1008,       202.53, '2013-11-25', '08:33')
    ,(1008,       206.38, '2014-06-16', '09:38')
    ,(1008,       209.04, '2014-12-22', '07:55')
    Select * from @table
    Regards,
    Jag

    The INSERT statement had errors and incorrect data, so I changed it.
    The rent columns had the INT data type, and I changed them to Money.
    DECLARE @table TABLE
    ( Prop_Code INT
    ,Current_Rent Money
    ,Revised_Rent Money
    ,Rent_Review_Date varchar(10)
    ,Rent_Review_Time DATEtime)
    INSERT INTO @table (PROP_CODE,Current_Rent,Revised_Rent,Rent_Review_Date,Rent_Review_Time) VALUES
    (2977,372,339.15,'2013-07-08','7:44')
    ,(2977,372,339.15,'2013-07-03','11:01')
    ,(2977,372,372,'2014-06-30','9:07')
    ,(2977,372,372,'2014-07-07','11:06')
    ,(2981,372,372,'2014-07-07','11:06')
    ,(2981,372,340.15,'2013-07-08','7:23')
    ,(2981,372,314.15,'2013-07-08','7:44')
    ,(2981,372,340.15,'2013-07-29','7:16')
    ,(3089,205.63,400,'2014-10-27','8:38')
    ,(3089,205.63,205.63,'2014-02-03','8:29')
    ,(3089,205.63,127.64,'2014-01-20','0:52')
    ,(3089,205.63,123.02,'2013-08-12','8:28')
    ,(3089,205.63,205.63,'2014-12-15','8:46')
    ,(3109,252.62,198,'2014-01-20','0:52')
    ,(3109,252.62,252.62,'2014-04-07','8:30')
    ,(3109,252.62,198,'2013-08-12','8:28')
    ,(3117,284.96,336,'2014-04-21','1:03')
    ,(3125,267.53,267.53,'2014-02-03','8:29')
    ,(1008, NULL ,181.32, '2013-03-19', '04:41')
    ,(1008 , NULL , 186.15, '2013-03-19', '04:41')
    ,(1008 ,NULL , 187.62, '2013-03-19', '04:41')
    ,(1008, NULL , 191.07, '2013-03-19', '04:41')
    ,(1008, NULL , 202.33, '2013-08-12', '08:28')
    ,(1008, NULL , 202.53, '2013-11-25', '08:33')
    ,(1008, NULL , 206.38, '2014-06-16', '09:38')
    ,(1008, NULL , 209.04, '2014-12-22', '07:55')
    WITH TempCTE
    AS (
    SELECT *
    ,ROW_NUMBER() OVER (
    PARTITION BY Prop_Code ORDER BY Rent_Review_Date DESC
    ) RowNum
    FROM @table
    )
    SELECT a.Prop_Code
    ,a.Revised_Rent
    ,a.Rent_Review_Date
    ,b.Revised_Rent
    ,b.Rent_Review_Date
    ,DATEDIFF(Day, a.Rent_Review_Date, b.Rent_Review_Date)
    FROM TempCTE a
    INNER JOIN TempCTE b ON a.Prop_Code = b.Prop_Code
    AND a.RowNum = b.RowNum + 1
    WHERE b.RowNum = 1
    -Vaibhav Chaudhari

  • SQL query to calculate order completion time

    Hello,
    This is probably a textbook example, but I was wondering if someone could show me the technique to answer the following question based on the sample data:
    Show all orders from the orders table where completion time (defined as the difference between the time the order was completed and the time it was received) is > 5 hours.
    Order_id   Action     Time
    1          Received   1/11/2014 10:12:00
    1          Logged     1/11/2014 10:15:00
    1          Sent       1/11/2014 15:15:00
    1          Complete   1/11/2014 16:15:00
    2          Received   31/10/2014 8:10
    2          Logged     31/10/2014 8:28
    2          Sent       31/10/2014 10:11
    2          Complete   31/10/2014 12:13
    3          Received   30/10/2014 13:10
    3          Logged     30/10/2014 15:10
    3          Sent       30/10/2014 16:10
    3          Complete   30/10/2014 17:10
    Thanks

    create table orders (order_id int, action varchar(50), [time] datetime)
    insert into orders values(1,'Received','1/11/2014 10:12'),(1,'Logged','1/11/2014 10:15'),(1,'Sent','1/11/2014 15:15'),(1,'Complete','1/11/2014 16:15'),
    (2,'Received','10/31/2014 08:10'),(2,'Logged','10/31/2014 08:28'),(2,'Sent','10/31/2014 10:11'),(2,'Complete','10/31/2014 12:13'),
    (3,'Received','10/30/2014 13:10'),(3,'Logged','10/30/2014 15:10'),(3,'Sent','10/30/2014 16:10'),(3,'Complete','10/30/2014 17:10')
    select order_id from (
    select *, datediff(hour,lag([time],1) Over(partition by order_id Order by [time]),[time]) diff
    from orders WHERE action IN ('Complete','Received')) t
    WHERE diff>=5
    drop table orders
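    For comparison, the same filter can be written without LAG by pairing each order's Received and Complete timestamps with conditional aggregation (a sketch against the same orders table as above):
    -- one row per order; keep orders whose Received-to-Complete gap exceeds 5 hours
    select order_id
    from orders
    group by order_id
    having datediff(hour,
                    min(case when action = 'Received' then [time] end),
                    min(case when action = 'Complete' then [time] end)) > 5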

  • SQL Query statistics?

    I'd like to know if there is an easy and convenient way for someone to execute an SQL query and calculate these measurements relative to the query:
    1) The I/O performed
    2) Number of read I/O
    3) Number of write-out I/O (such that 2 + 3 = 1)
    4) Number of buffered reads
    5) Query execution time
    6) Query CPU usage
    I've heard mention of such statistics in views such as V$OSSTAT, etc.
    But these views give the current values, not the cumulative values specific to the query, such as cumulative CPU usage time since the start of the query, cumulative I/O since the query began, etc.
    What is the right approach to this? Is it through the V$SESSION view? Would you go about it by storing the V$SESSION values before the query, running the query, and then getting the new V$SESSION values?
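    Roughly along the lines you describe, one common approach is to snapshot the current session's statistics before and after the statement and take the difference. A minimal sketch using V$MYSTAT joined to V$STATNAME (the statistic names below are common ones; check V$STATNAME on your version):
    -- run once before and once after the query; the per-statistic delta is what the query consumed
    SELECT sn.name, ms.value
    FROM   v$mystat ms
    JOIN   v$statname sn ON sn.statistic# = ms.statistic#
    WHERE  sn.name IN ('session logical reads', 'physical reads',
                       'physical writes', 'CPU used by this session');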

    Well, actually I stayed here a little longer to try part 2 of your manual.
    It worked fine; below is the output, originating from a spool file, of my first experiment:
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   |
    Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |
          8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |
          8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |
          8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |
          8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.14
    SQL>
    SQL> spool off
    SQL>
    SQL> exit
    Disconnected from Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - Pr
    oduction
    With the OLAP, Data Mining and Real Application Testing options
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_
    autotrace>my_part2_template.bat
    C:\Documents and Settings\Administrator\My Documents\scripts\oracle\99templates_
    autotrace>sqlplus /NOLOG @my_part2_template.sql
    SQL*Plus: Release 11.1.0.7.0 - Production on Qui Jul 9 22:00:39 2009
    Copyright (c) 1982, 2008, Oracle.  All rights reserved.
    Connected.
    SQL> set timing on trimspool on linesize 250 pagesize 999
    SQL>
    SQL> -- system environment can be checked with:
    SQL> -- show parameter statis
    SQL> -- this show a series of parameters related to statistics
    SQL>
    SQL> -- this setting can influence your sorting
    SQL> -- in particular if an index can satisfy your sort order
    SQL> -- alter session set nls_language = 'AMERICAN';
    SQL>
    SQL>
    SQL> rem Set the ARRAYSIZE according to your application
    SQL> set arraysize 15 termout off
    SQL>
    SQL> spool diag2.log
    SQL>
    SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'))
    PLAN_TABLE_OUTPUT
    SQL_ID  b4j5rmwug3u8p, child number 0
    SELECT USRID, FAVF FROM  (SELECT ID as USRID, FAVF1, FAVF2, FAVF3,
    FAVF4, FAVF5   FROM PROFILE) P UNPIVOT  (FAVF FOR CNAME IN   ( FAVF1,
    FAVF2, FAVF3, FAVF4, FAVF5)) FAVFRIEND
    Plan hash value: 888567555
    | Id  | Operation           | Name    | Starts | E-Rows | A-Rows |   A-Time   |
    Buffers |
    |   0 | SELECT STATEMENT    |         |      1 |        |      5 |00:00:00.01 |
          8 |
    |*  1 |  VIEW               |         |      1 |      5 |      5 |00:00:00.01 |
          8 |
    |   2 |   UNPIVOT           |         |      1 |        |      5 |00:00:00.01 |
          8 |
    |   3 |    TABLE ACCESS FULL| PROFILE |      1 |      1 |      1 |00:00:00.01 |
          8 |
    Predicate Information (identified by operation id):
       1 - filter("unpivot_view_013"."FAVF" IS NOT NULL)
    Note
       - dynamic sampling used for this statement
    26 rows selected.
    Elapsed: 00:00:00.01
    SQL>
    SQL> spool off
    SQL>
    SQL>
    SQL> -- rem End of Part 2
    SQL> show parameter statis
    NAME                                 TYPE        VALUE
    optimizer_use_pending_statistics     boolean     FALSE
    statistics_level                     string      ALL
    timed_os_statistics                  integer     5
    timed_statistics                     boolean     TRUE
    SQL> quit
    If you notice, at the end of the execution I print my session's statistics environment. The statistics_level was set to ALL, as you advised. But the output I obtained seems a lot more incomplete than the one I got from using the autotrace feature.
    Am I missing something? Could it have something to do with the fact that I am running as SYSTEM and not as SYSDBA? SYSTEM should have enough permissions to access its session environment statistic values.
    "Maybe it's just a language issue (I'm not a native speaker either) but your understanding of Oracle's read consistency model seems to be questionable." No, you could be right; my understanding is questionable indeed. I am familiar with the general concepts of concurrency.
    Things like reading uncommitted data:
    T1 writes A; T2 reads A -> here is a conflict.
    This is enough for you to not be able to guarantee that the execution is serializable.
    T1 reads A; T2 writes A and commits; T1 reads A again: you get another conflict, the unrepeatable read.
    And so on.
    I am also familiar with the different isolation levels that database systems in general give you.
    Conflict serializable, normally implemented using the strict two-phase locking mechanism.
    Repeatable reads: you lock the rows you access during a transaction. You are guaranteed that the data values you access do not change, but other entries could be added to the table.
    Unrepeatable reads: only the data you modify is guaranteed to stay the same; only your write locks are kept throughout the transaction. And so on.
    But anyway...
    What you explained in your post is more or less what I was saying, and in your case much more clearly than in mine.
    For instance, a thread T1 reads A; a thread T2 writes A.
    In Oracle, you could have the thread T1 read A again without getting an unrepeatable-read error. This is strange: in a normal system you directly get an exception telling you that your vision of the system is inconsistent. But in Oracle you can do so, because Oracle tries to fetch from the undo tablespace those same data objects, consistent with the view of the system you had when you first accessed them. It looks for a block with an SCN older than the current version's SCN, or something like that. The only problem is that those modified blocks do not stay there indefinitely. Once a transaction commits you have a time bomb in your hands; that is, if you are working with data that is not at its most current version.
    But you are quite right, I have not read enough about Oracle concurrency. But I have a good enough understanding for my current needs.
    I cannot know everything, nor do I want to :D.
    My memory is very limited.
    My best regards, and deepest thanks for your time and attention.
    Edited by: user10282047 on Jul 9, 2009 2:41 PM

  • Single SQL Query with different where conditions

    Experts,
    I have a requirement to design a report. Here are the details
    I have the following report table layout:
    Profit Center    Gross Sales (This Year)    Gross Sales (Last Year)    % Change Year over Year
    The Report has a selection of entering the Start Date.
    I have a single table in oracle which has profit center and Gross Sales Values on daily basis.
    I want to write a single SQL query to calculate both Gross Sales Current Year and Gross Sales Last Year. I can calculate Gross Sales Current Year by putting a where condition on start date = the current-year date, which I pass through the report. I want to calculate Gross Sales Last Year in the same query by putting a different where condition, i.e. start date = the last-year date based on the date input.
    I don't know how to put two where conditions in a single query for two different columns.
    Any help will be appreciated.
    Thanks in advance
    Regards
    Santosh

    Instead of changing your where clause, couldn't you just determine the yearly totals from your table and then use the LAG function to get last year's total?
    Something like this?
    I just made up 10,000 days' worth of sales and called it fake_table; it is supposed to represent a variant of the table you were describing as your base table.
    with fake_table as
      (select trunc(sysdate + level) the_day,
              level daily_gross_sales
         from dual
       connect by level < 10001)
    select yr, year_gross_sale,
           lag(year_gross_sale) over (order by yr) prev_year_gross_sale,
           (year_gross_sale - lag(year_gross_sale) over (order by yr)) / year_gross_sale * 100 percent_change
    from
      (select distinct yr, year_gross_sale
         from (select the_day,
                      daily_gross_sales,
                      extract(year from the_day) yr,
                      extract(year from add_months(the_day, 12)) next_yr,
                      sum(daily_gross_sales) over (partition by extract(year from the_day)) year_gross_sale
                 from fake_table))
    order by yr
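    For the record, the two-period comparison described in the question can also be done in a single pass with conditional aggregation rather than two WHERE clauses. A rough sketch, with assumed names sales_table(profit_center, sales_date, gross_sales) and a :start_date bind variable standing in for the report parameter:
    -- one SUM per period, selected by CASE instead of by WHERE
    select profit_center,
           sum(case when sales_date >= :start_date
                     and sales_date <  add_months(:start_date, 12)
                    then gross_sales else 0 end) as gross_sales_this_year,
           sum(case when sales_date >= add_months(:start_date, -12)
                     and sales_date <  :start_date
                    then gross_sales else 0 end) as gross_sales_last_year
    from   sales_table
    group  by profit_center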

  • Failing to import data using SQL query

    I am trying to import data from a SQL query into Analysis Services in SSDT. I went to Model > Import from Data Source > Microsoft SQL Server and chose "from query". I typed my query, which works fine in SSMS, but when I press Import it gives me an error. What is going on? Please help. Here is the error:
    DirectQuery error: All tables used when querying in DirectQuery Mode must be from a single relational Data Source.
    here is my query:
    USE AdventureWorks2012;
    SELECT SalesPersonID, [29484] Brian, [29485] Peter, [29486] Frank, [29518] Sarah, [29519] Janet, [29520] Alice, [29608] Martin, [29609] Patrick, [29610] Wasu, [29780] Samanyika, [29781] Vladmire
    FROM
    (
        SELECT SalesOrderID, SalesPersonID, CustomerID
        FROM Sales.SalesOrderHeader
        WHERE SalesPersonID IS NOT NULL
    ) AS P PIVOT
    (
        COUNT (SalesOrderID) FOR CustomerID IN ([29484], [29485], [29486], [29518], [29519], [29520], [29608], [29609], [29610], [29780], [29781])
    ) AS PVT
    ORDER BY SalesPersonID ASC;

    Hi,
    When importing data from relational data source, the steps are:
    In SQL Server Data Tools (SSDT), click the Model menu, and then click Import from Data Source.
    On the Connect to a Data Source page, select the type of database to connect to, and then click Next.
    Follow the steps in the Table Import Wizard. On subsequent pages, you will be able to select specific tables and views, or apply filters, by using the Select Tables and Views page or by creating a SQL query on the Specify a SQL Query page.
    As we can see from these steps, we need to select one database, so we needn't write "USE AdventureWorks2012" in the query.
    Reference:Import from a Relational Data Source
    Regards,
    Charlie Liao
    TechNet Community Support

  • SQL query which returns all the NET SERVICES which are available in tnsnames.ora

    Hi all,
    How do I write a SQL query which returns all the net services which are available in tnsnames.ora?
    Regards
    s

    Also, tnsnames.ora is stored on the client, and not necessarily on the server; it's possible (and quite likely) that the name I use for a database in my tnsnames.ora could be different from the name you use for the same database; conversely we might use the same name for two different databases.
    Regards Nigel

  • How to calculate I/O on a SQL query

    Hi all,
    Could you please tell me how to calculate I/O for a specific SQL query?
    Regards,
    Santosh.

    In what context are you looking at the I/O consumed by the query? One option you have is Autotrace; another is to trace the query and format the results using TKPROF.
    Aman....
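    As a quick illustration of the Autotrace option in SQL*Plus (the SELECT below is just a stand-in; substitute the statement you want to measure):
    SET AUTOTRACE TRACEONLY STATISTICS
    SELECT COUNT(*) FROM all_objects;
    SET AUTOTRACE OFF
    -- the statistics section of the output reports consistent gets, physical reads, redo size, etc. for that run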

  • Combine multiple web services with the same SQL query into one

    Hello,
    I would like to ask a question about combining multiple similar web services into one. Can you please tell me if it is possible to combine 4-5 web services into one, since they are built on the same SQL query with 5 different criteria or conditions, so that the user can enter any of the 5 criteria to populate the data on the form instead of having 5 different web services?
    e.g. query: SELECT appName, permit#, address, phone, description, type, section FROM table WHERE the criterion can be appName, permit#, address, phone, or description, to populate the rest of the data on the form.
    Has anyone ever done something like this in Workbench ES? If so, please assist. I know it might be easier to build it in Visual Basic, C#, or .NET, but the requirement is to build it in Workbench ES.
    Thanks in advance,
    Han Dao

    If you are querying for Name, PhoneNumber, and SSN, and you queried for all people with a phone number that started with 867, you would have a potentially long list of people.  So to keep track of all of the people, we store each record in XML complex elements.  The root node is just any name you want, and the repeating element is the complex element name. 
    So using the example from above, I'm going to specify the following:
         Root Node: Result
         Repeating Element: Person
    So now when I do a query, my resultXML will look like:
    <Result>
          <Person>
                 <Name>Alex</Name>
                 <PhoneNumber>867-5309</PhoneNumber>
                 <SSN>111-11-1111</SSN>
          </Person>
    </Result>
    If your query returned multiple results (like ours would probably), it would look like:
    <Result>
          <Person>
                 <Name>Alex</Name>
                 <PhoneNumber>867-5309</PhoneNumber>
                 <SSN>111-11-1111</SSN>
          </Person>
          <Person>
                 <Name>Han</Name>
                 <PhoneNumber>867-2169</PhoneNumber>
                 <SSN>222-22-2222</SSN>
          </Person>
    </Result>
    So Result and Person are just there to give a little bit of structure to the XML result (containers, really).  So you can name them whatever is helpful for you.
    The column name mappings map the query columns (Name, PhoneNumber, SSN) to some node in the XML (Name, PhoneNumber, SSN).  So you don't need to specify which field maps to what in the form.  Just copy the column names to the element name so you have a 1-to-1 naming.  If you want to manipulate the XML a bit though, you could do:
    Column Name               Element
    Name                            YourName
    PhoneNumber                Phone
    SSN                              Secret
    which would then make your xml look like:
    <Result>
          <Person>
                 <YourName>Alex</YourName>
                 <Phone>867-5309</Phone>
                 <Secret>111-11-1111</Secret>
          </Person>
    </Result>
    It lets you change the XML element names to whatever you want. Otherwise by default they take on their column names.
    In your form, you could bind to the WSDL through the Data Connections pane and point it to your web service.  This will then create form elements that you can just drag and drop, allowing you to have the information available when the service gets run.  Once the service is called, you can modify the field's data to get whatever information you need in order to populate other form fields.
    If that is too confusing, feel free to send me your form (e-mail is on profile page) and I'll add comments to it to show you how to set up the form for the web service call (and also give me the link to your webservice)

  • Need generic dynamic sql query to generate nodes depending on dealer levels

    Input table:
    create table #test(dealerid integer ,dealerlvl integer)
    insert into #test values(1,1)
    insert into #test values(1,2)
    insert into #test values(1,3)
    insert into #test values(1,4)
    insert into #test values(2,1)
    insert into #test values(2,2)
    insert into #test values(2,3)
    insert into #test values(2,4)
    insert into #test values(2,5)
    insert into #test values(2,6)
    go
    create table #test2(dealerid integer,node integer,prntnode integer,dealerlvl integer)
    insert into #test2 values (1,234,124,2)
    insert into #test2 values (1,123,234,1)
    insert into #test2 values (1,238,123,2)
    insert into #test2 values (1,235,238,3)
    insert into #test2 values (1,253,235,4)
    insert into #test2 values (2,21674,124,3)
    insert into #test2 values (2,1233,21674,1)
    insert into #test2 values (2,2144,1233,2)
    insert into #test2 values (2,2354,2144,3)
    insert into #test2 values (2,24353,2354,4)
    insert into #test2 values (2,245213,24353,5)
    insert into #test2 values (2,2213,245213,6)
    Expected result :
    I have two test cases here, with dealerID 1 and dealerID 2.
    Result for DealerID1
    Result needed for DealerID2:
    The levels for dealers might change (Dealer 1 has 4 levels, and Dealer 2 has 6 levels), so I need to create a dynamic SQL query which lists each node as a separate column depending on the levels.
    I have hacked together the query to give the result I need:
    select a.dealerid,a.node as Lvl1,b.node as lvl2,c.node as lvl3,d.node as lvl4
    from #test2 a 
    join #test2 b on a.node=b.prntnode
    join #test2 c on b.node=c.prntnode
    join #test2 d on c.node=d.prntnode
    where a.dealerid=1 and a.dealerlvl=2
    select a.dealerid, a.node as Lvl1,
    b.node as lvl2,c.node as lvl3,d.node as lvl4,e.node as lvl5,f.node as lvl6--,a.dealerlvl,a.dealerid
    from #test2 a 
    join #test2 b on a.node=b.prntnode
    join #test2 c on b.node=c.prntnode
    join #test2 d on c.node=d.prntnode
    join #test2 e on d.node=e.prntnode
    join #test2 f on e.node=f.prntnode
    where a.dealerid=2 and a.dealerlvl=3
    I am sure there is a better way to do this with dynamic SQL. Please help.
    Thanks

    -- Dynamic PIVOT
     DECLARE @T AS TABLE(y INT NOT NULL PRIMARY KEY);
    DECLARE
       @cols AS NVARCHAR(MAX),
       @y    AS INT,
       @sql  AS NVARCHAR(MAX);
    -- Construct the column list for the IN clause
     SET @cols = STUFF(
       (SELECT N',' + QUOTENAME(y) AS [text()]
        FROM (SELECT DISTINCT dealerlvl AS y FROM dbo.test2) AS Y
        ORDER BY y
        FOR XML PATH('')),
       1, 1, N'');
    -- Construct the full T-SQL statement
     -- and execute dynamically
     SET @sql = N'SELECT *
     FROM (SELECT dealerid, dealerlvl, node
           FROM dbo.Test2) AS D
       PIVOT(MAX(node) FOR dealerlvl IN(' + @cols + N')) AS P;';
    EXEC sp_executesql @sql;
     GO

  • Concatenate results SQL query and CASE use Report Builder Reporting Services

    I need to concatenate the results from a SQL query that is using CASE.  The query is listed below.  I do not need permitsubtype but I need to concatenate the results from the permittype. 
    I tried deleting the permitsubtype query and it would not run correctly.  Please see the query and diagram below.  Any help is appreciated.
    select  PERMIT_NO
    ,(case when
      ISNULL(PERMITTYPE,'') = ''
      then 'Unassigned'
      else (select LTRIM(RTRIM(PERMITTYPE)))
      END) AS PERMITTYPE
    ,(case when
      ISNULL(PERMITSUBTYPE,'') = ''
      then 'Unassigned'
      else (select LTRIM(RTRIM(PERMITSUBTYPE)))
      END) AS PERMITSUBTYPE
     ,ISSUED
     ,APPLIED
     ,STATUS 
     ,SITE_ADDR 
     ,SITE_APN
     ,SITE_SUBDIVISION
     ,OWNER_NAME
     ,CONTRACTOR_NAME
     ,ISNULL(JOBVALUE,0) AS JOBVALUE
     ,FEES_CHARGED
     ,FEES_PAID
    ,BLDG_SF
    from Permit_Main
    where ISSUED between @FromDate and @ToDate

    Hi KittyCat101,
    As per my understanding, you used a CASE WHEN statement in the query; you do not need to display permitsubtype in the report, but when you tried to delete permitsubtype from the query, it did not run correctly. In order to improve the efficiency of troubleshooting,
    I need to ask several questions:
    “I tried deleting the permitsubtype query and it would not run correctly.” As far as we can see, deleting permitsubtype from the query you provided should have no effect; could you please provide the complete SQL query for the report?
    Could you please provide detailed information about the report? I would appreciate it if you could provide sample data and a screenshot of the report.
    Please provide some more detailed information about your requirements.
    This may be a lot of information to ask for at one time. However, collecting this information now will help us move more quickly toward a solution.
    Thanks,
    Wendy Fu
