Rate Comparison query

Table Structure
Po_no     number(12)
Item_code varchar2(10)
Rate      number(12,3)

I want the output like this:

Item_code    Current Rate    Previous Rate

where Current Rate = rate of max(po_no) and Previous Rate = rate of the 2nd max(po_no).

Maybe NOT TESTED!
select item_code "Item_code",max(max_rate) "Current Rate",min(max_rate) "Previous Rate"
  from (select item_code,max(rate) max_rate
          from (select po_no,item_code,rate,
                       dense_rank() over (partition by po_no order by rate desc) r
                  from your_table)
         where r in (1,2)
         group by item_code,r)
 group by item_code
Regards
Etbin
The version below returns a null Previous Rate when there is only one value:
select item_code "Item_code",max(max_rate) "Current Rate",min(case r when 2 then max_rate end) "Previous Rate"
  from (select item_code,max(rate) max_rate,r
          from (select po_no,item_code,rate,
                       dense_rank() over (partition by item_code order by rate desc) r
                  from (select 10 po_no,'1' item_code,12.345 rate from dual union all
                        select 10,'1',12.678 from dual union all
                        select 10,'1',12.234 from dual union all
                        select 10,'1',12.789 from dual union all
                        select 10,'2',10.678 from dual union all
                        select 10,'2',10.234 from dual union all
                        select 10,'2',10.789 from dual union all
                        select 10,'3',10.001 from dual))
         where r in (1,2)
         group by item_code,r)
 group by item_code
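
Both versions above rank by rate, which matches the sample data only because every row shares the same po_no. If you literally need the rate of the highest and second-highest po_no per item, a sketch (untested, assuming one rate per po_no and item_code) would order by po_no instead:

select item_code "Item_code",
       max(case r when 1 then rate end) "Current Rate",
       max(case r when 2 then rate end) "Previous Rate"
  from (select item_code,rate,
               dense_rank() over (partition by item_code order by po_no desc) r
          from your_table)
 where r in (1,2)
 group by item_code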

Similar Messages

  • Exchange Rate Comparison ( SD )

    Hi All,
    I need to develop an Exchange Rate Comparison Report.
    The selection criteria should allow for:
    - a range of From and To Currency Codes
    - 2 dates for comparison.
    The Exchange Rate Type would be 'M'.
    I need the Exchange Rate and the Translation Ratio in the output.
    Would Exchange Rate be the field TCURR-UKURS?
    And how do I calculate the Translation Ratio?
    Any help would be highly appreciated.

    Find Rate Based on Amounts, Currency Keys and Date
    A translation rate is determined from the amounts entered. As the rate
    is also dependent on the units of each individual currency, (e.g. a
    DEM/ITL rate of 1.67 with the units 1 DEM and 1000 ITL means that 1 DEM
    is equal to 1670 ITL), table TCURR must also be read before the rate can
    be established. For this, you need both currency keys, an exchange rate
    type and a validity date. The factors determined in this way are
    transferred to the calling program along with the determined rate. If
    exchange rate fixing is defined for exchange rate type TYPE_OF_RATE,
    this information is transferred to the calling program. If one amount is
    calculated from the other even where a fixed exchange rate is used, this
    amount is returned instead of the rate calculated.
    Example call:
       CALL FUNCTION 'CALCULATE_EXCHANGE_RATE'
         EXPORTING
           DATE             = BKPF-WWERT
           FOREIGN_AMOUNT   = BSEG-WRBTR
           FOREIGN_CURRENCY = BKPF-WAERS
           LOCAL_AMOUNT     = BSEG-DMBTR
           LOCAL_CURRENCY   = T001-WAERS
           TYPE_OF_RATE     = 'M'
         IMPORTING
           EXCHANGE_RATE    = KURS
           FOREIGN_FACTOR   = FAKTOR-F
           LOCAL_FACTOR     = FAKTOR-L
         EXCEPTIONS
           NO_RATE_COMPUTABLE = 4
           NO_RATE_FOUND      = 8
           NO_FACTORS_FOUND   = 12.
    Try this, taking the date from the select-options and passing it in each time in a loop.

  • How can I give a date in each input for applying the exchange rate in a query?

    Hi Gurus,
    We have a requirement to create some currency conversion queries. In the selection screen the user should be able to give four inputs, like those given below:
    Input 1:  a) Key figures
              b) Fiscal Year
              c) Fiscal Period
              d) Exchange Rate Type
              e) Date (the exchange rate applicable on the given date will be applied)
    Input 2:  a) Key figures
              b) Fiscal Year
              c) Fiscal Period
              d) Exchange Rate Type
              e) Date (the exchange rate applicable on the given date will be applied)
    Input 3:  a) Key figures
              b) Fiscal Year
              c) Fiscal Period
              d) Exchange Rate Type
              e) Date (the exchange rate applicable on the given date will be applied)
    Input 4:  a) Key figures
              b) Fiscal Year
              c) Fiscal Period
              d) Exchange Rate Type
              e) Date (the exchange rate applicable on the given date will be applied)
    So we will have 4 key figures in the query results with the exchange rate applied on the given date.
    I will make four restricted key figures and build the query, but I do not know how to give a date in each input for applying the exchange rate.
    Please give your suggestions to resolve my problem.
    Many thanks in advance.

    You cannot bring the key figures into the selection screen for the currency translation. Instead, you can apply a currency translation type to the respective key figures in the query definition.
    The currency translation type can be defined in transaction RSCUR, where you can maintain parameters like Exchange Rate Type, Exchange Rate Date, etc.
    You can refer to my article on this at
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/600157ec-44e5-2b10-abb0-dc9d06ba6c2f
    Hope this helps.
    Regards,
    Yogesh

  • Comparison query

    I have a table called vehicle_density. The following is sample data from this table:
    Period       Vehicle   Location
    Jan-2009     300       Jayanagar
    Feb-2009     245       Jayanagar
    Mar-2009     236       Jayanagar
    Apr-2009     298       Jayanagar
    May-2009     325       Jayanagar
    Jun-2009     204       Jayanagar
    Jan-2009     568       BTM
    Feb-2009     585       BTM
    Mar-2009     401       BTM
    Apr-2009     565       BTM
    May-2009     621       BTM
    Jun-2009     425       BTM
    Jan-2009     145       RT Nagar
    Feb-2009     200       RT Nagar
    Mar-2009     254       RT Nagar
    Apr-2009     120       RT Nagar
    May-2009     282       RT Nagar
    Jun-2009     96        RT Nagar

    Now I need a query to compare vehicle density between different locations for a given time period.
    For example, here is sample output comparing vehicle density between BTM and Jayanagar from Jan-2009 to Mar-2009:

    Period       Jayanagar   BTM
    Jan-2009     300         568
    Feb-2009     245         585
    Mar-2009     236         401

    In the same way, a density comparison between Jayanagar, BTM and RT Nagar from Jan-2009 to Apr-2009 is below:

    Period       Jayanagar   BTM   RT Nagar
    Jan-2009     300         568   145
    Feb-2009     245         585   200
    Mar-2009     236         401   254
    Apr-2009     298         565   120

    Can I do it in a single SQL query?
    Thanks,
    Sujnan

    Hi, Sujnan,
    Yes, you can do that in one query. It's called a pivot.
    One way is shown below. This example uses the COUNT function; for what you want, use SUM instead.
    Search for "pivot" or "rows to columns" for more examples.
    --     How to Pivot a Result Set (Display Rows as Columns)
    --     For Oracle 10, and earlier
    --     Actually, this works in any version of Oracle, but the
    --     "SELECT ... PIVOT" feature introduced in Oracle 11
    --     is better.  (See Query 2, below.)
    --     This example uses the scott.emp table.
    --     Given a query that produces three rows for every department,
    --     how can we show the same data in a query that has one row
    --     per department, and three separate columns?
    --     For example, the query below counts the number of employees
    --     in each department that have one of three given jobs:
    PROMPT     ==========  0. Simple COUNT ... GROUP BY  ==========
    SELECT     deptno
    ,     job
    ,     COUNT (*)     AS cnt
    FROM     scott.emp
    WHERE     job     IN ('ANALYST', 'CLERK', 'MANAGER')
    GROUP BY     deptno
    ,          job;
    Output:
        DEPTNO JOB              CNT
            20 CLERK              2
            20 MANAGER            1
            30 CLERK              1
            30 MANAGER            1
            10 CLERK              1
            10 MANAGER            1
            20 ANALYST            2
    PROMPT     ==========  1. Pivot  ==========
    SELECT     deptno
    ,     COUNT (CASE WHEN job = 'ANALYST' THEN 1 END)     AS analyst_cnt
    ,     COUNT (CASE WHEN job = 'CLERK'   THEN 1 END)     AS clerk_cnt
    ,     COUNT (CASE WHEN job = 'MANAGER' THEN 1 END)     AS manager_cnt
    FROM     scott.emp
    WHERE     job     IN ('ANALYST', 'CLERK', 'MANAGER')
    GROUP BY     deptno;
    --     Output:
        DEPTNO ANALYST_CNT  CLERK_CNT MANAGER_CNT
            30           0          1           1
            20           2          2           1
            10           0          1           1
    --     Explanation
    (1) Decide what you want the output to look like.
         (E.g. "I want a row for each department,
         and columns for deptno, analyst_cnt, clerk_cnt and manager_cnt.")
    (2) Get a result set where every row identifies which row
         and which column of the output will be affected.
         In the example above, deptno identifies the row, and
         job identifies the column.
         Both deptno and job happened to be in the original table.
         That is not always the case; sometimes you have to
         compute new columns based on the original data.
    (3) Use aggregate functions and CASE (or DECODE) to produce
         the pivoted columns. 
         The CASE statement will pick
         only the rows of raw data that belong in the column.
         If each cell in the output corresponds to (at most)
         one row of input, then you can use MIN or MAX as the
         aggregate function.
         If many rows of input can be reflected in a single cell
         of output, then use SUM, COUNT, AVG, STRAGG, or some other
         aggregate function.
         GROUP BY the column that identifies rows.
    PROMPT     ==========  2. Oracle 11 PIVOT  ==========
    WITH     e     AS
    (     -- Begin sub-query e to SELECT columns for PIVOT
         SELECT     deptno
         ,     job
         FROM     scott.emp
    )     -- End sub-query e to SELECT columns for PIVOT
    SELECT     *
    FROM     e
    PIVOT     (     COUNT (*)
              FOR     job     IN     ( 'ANALYST'     AS analyst
                             , 'CLERK'       AS clerk
                             , 'MANAGER'     AS manager
                             )
              );
    NOTES ON ORACLE 11 PIVOT:
    (1) You must use a sub-query to select the raw columns.
    An in-line view (not shown) is an example of a sub-query.
    (2) GROUP BY is implied for all columns not in the PIVOT clause.
    (3) Column aliases are optional. 
    If "AS analyst" is omitted above, the column will be called 'ANALYST' (single-quotes included).

  • Exchange rate in query designer.

    I would like to create a query in Query Designer.
    In the selection screen there is a choice of currency for a year T and also a choice of currency for a year T+1; the purpose of the report is to compare the selling price of a product in two different currencies.
    Can you help me design this complex query?

    Hi,
    You will have to design two currency translation types, for T and T+1.
    This design you should do in transaction RSCUR,
    but you should maintain the currency rates via OB08 in BI.
    In RSCUR, you define your two currency translation types. These two
    currency translation types (say X and Y) have to be used with the key figure
    Selling Price. When you add Selling Price to the key figure column, you can
    see a currency translation tab for that key figure; add X as the currency
    translation type. Similarly you can do it for Y too.
    Please assign points if it helped you.

  • Daily Sales Total Comparison Query

    Hi Experts,
    I'm trying to make a query to get daily sales totals for the week, and I wish to make a graph.
    However, if there are no figures in credit notes, down payment invoices, or invoices, then the query does not show any figures for that particular date.
    I would appreciate it if anyone could help with this.
    SELECT DISTINCT
    GetDate(),
    SUM (DISTINCT T0.DocTotal) AS 'Daily INV Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE DateDiff(D,T0.DocDate,GetDate())=0 AND DateDiff(D,T1.DocDate,GetDate())=0 AND DateDiff(D,T2.DocDate,GetDate())=0
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 1, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 1, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 1, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 1, 0))
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 2, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 2, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 2, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 2, 0))
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 3, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 3, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 3, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 3, 0))
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 4, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 4, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 4, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 4, 0))
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 5, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 5, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 5, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 5, 0))
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 6, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 6, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 6, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 6, 0))
    UNION ALL
    SELECT
    DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 7, 0)),
    SUM (DISTINCT T0.DocTotal) AS 'Daily Sales Sum',
    SUM (DISTINCT T2.DocTotal) AS 'Daily DT INV Sum',
    SUM (DISTINCT T1.DocTotal*-1) AS 'Daily CR Sum',
    SUM (DISTINCT T0.DocTotal) + SUM (DISTINCT T2.DocTotal) - SUM (DISTINCT T1.DocTotal) AS 'Daily Sales Total'
    FROM OINV T0, ORIN T1, ODPI T2
    WHERE T0.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 7, 0)) AND T1.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 7, 0)) AND T2.DocDate = DATEADD(dd, 0, DATEADD(dd, DATEDIFF(dd, 0, GetDate()) - 7, 0))

    Could you let me know how to make a pivot query?
                        AR INV Total  |  AR Down Payment Total  |  AR Credit Total  |  (AR INV Total + AR DP Total - AR Credit Total)
    Today's Sales
    Yesterday
    Until Week Before
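
    No pivot is actually required for that layout; the repeated UNION ALL branches and the SUM(DISTINCT) over a cross join can be replaced by grouping each document table by day and joining the per-day totals to a list of day offsets, so dates with no documents still appear with zeros. A sketch (untested; requires SQL Server 2008+ for the VALUES row constructor; table and column names as in your query):

    SELECT D.DayDate,
           ISNULL(INV.Total, 0) AS 'AR INV Total',
           ISNULL(DPI.Total, 0) AS 'AR Down Payment Total',
           ISNULL(CR.Total, 0)  AS 'AR Credit Total',
           ISNULL(INV.Total, 0) + ISNULL(DPI.Total, 0) - ISNULL(CR.Total, 0) AS 'Daily Sales Total'
    FROM (SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()) - n, 0) AS DayDate
          FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7)) AS Offsets(n)) D
    LEFT JOIN (SELECT DATEADD(dd, DATEDIFF(dd, 0, DocDate), 0) AS DayDate, SUM(DocTotal) AS Total
               FROM OINV GROUP BY DATEADD(dd, DATEDIFF(dd, 0, DocDate), 0)) INV ON INV.DayDate = D.DayDate
    LEFT JOIN (SELECT DATEADD(dd, DATEDIFF(dd, 0, DocDate), 0) AS DayDate, SUM(DocTotal) AS Total
               FROM ODPI GROUP BY DATEADD(dd, DATEDIFF(dd, 0, DocDate), 0)) DPI ON DPI.DayDate = D.DayDate
    LEFT JOIN (SELECT DATEADD(dd, DATEDIFF(dd, 0, DocDate), 0) AS DayDate, SUM(DocTotal) AS Total
               FROM ORIN GROUP BY DATEADD(dd, DATEDIFF(dd, 0, DocDate), 0)) CR ON CR.DayDate = D.DayDate
    ORDER BY D.DayDate DESC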

  • Data Rate connection query / SNR

    Hi All
    I have BT Infinity up to 40MB at my local cabinet.
    The BT checker says my line can get up to 24MB, which I've always been happy with.
    I have always got around the 19MB mark, but looking at my connection stats I am wondering if I should be getting more speed.
    My Data Rate for the connection is always 19991, SNR up is around 6, and SNR down is usually 15-20.
    You can see my current connection stats in the table below.
    I don't understand why the 'Attainable Rate' shows as '40867'.
    Am I missing something simple?
    Cheers for looking
    Craig
    VDSL
    Link Status         Showtime
    Firmware Version    1412f0
    VDSL2 Profile       17a

    Basic Status          Upstream   Downstream   Unit
    Actual Data Rate      6799       19991        Kb/s
    SNR                   58         179          0.1 dB

    Advance Status        Upstream   Downstream   Unit
    Actual delay          0          0            ms
    Actual INP            0          0            0.1 symbols
    15M CV                0          0            counter
    1Day CV               16         81           counter
    15M FEC               0          0            counter
    1Day FEC              34         636          counter
    Total FEC             283        13238678     counter
    Previous Data Rate    6811       19991        Kbps
    Attainable Rate       6799       40867        Kbps
    Electrical Length     200        200          0.1 dB
    SNR Margin            58         N/A          (US0,--) 0.1 dB
    SNR Margin            59         180          (US1,DS1) 0.1 dB
    SNR Margin            N/A        179          (US2,DS2) 0.1 dB
    SNR Margin            N/A        N/A          (US3,DS3) 0.1 dB
    SNR Margin            N/A        N/A          (US4,DS4) 0.1 dB
    15M Elapsed time      20         20           secs
    15M FECS              0          0            counter
    15M ES                0          0            counter
    15M SES               0          0            counter
    15M LOSS              0          0            counter
    15M UAS               0          0            counter
    1Day Elapsed time     6321       6321         secs
    1Day FECS             4          71           counter
    1Day ES               7          56           counter
    1Day SES              0          0            counter
    1Day LOSS             0          0            counter
    1Day UAS              76         76           counter
    Total FECS            149        135577       counter
    Total ES              7917       38096        counter
    Total SES             13         76           counter
    Total LOSS            0          10           counter
    Total UAS             180        625          counter

    You can test your line here: https://www.bt.com/consumerFaultTracking/public/faults/reporting.do?pageId=21
    Is there any noise on the phone?
    If you have a line fault, you need an engineer to fix it.
    Once the instability is resolved, DLM will automatically raise the speed, so your SNR margin goes to 6 when you resync.
    If you found this post helpful, please click on the star on the left.
    If not, I'll try again.

  • Time comparison query help

    Hi All,
    Please help me write a query to compare the timestamps and filter the data into the intervals below:
    From the current date at 3:30 AM to 2:30 PM
    From the current date at 2:30 PM to 3:30 AM of the next day
    Input data
    2012-08-13 03:30:00.000
    2012-08-13 04:10:49.954
    2012-08-13 08:10:49.972
    2012-08-13 11:29:33.095
    2012-08-13 14:29:33.112
    2012-08-13 17:29:33.128
    2012-08-14 02:29:33.128

    with testdata as (
    select to_timestamp('2012-08-13 03:30:00.00','yyyy-mm-dd hh24:mi:ss.ff') d from dual union all
    select to_timestamp('2012-08-13 04:10:49.95','yyyy-mm-dd hh24:mi:ss.ff') from dual union all
    select to_timestamp('2012-08-13 08:10:49.97','yyyy-mm-dd hh24:mi:ss.ff') from dual union all
    select to_timestamp('2012-08-13 11:29:33.09','yyyy-mm-dd hh24:mi:ss.ff') from dual union all
    select to_timestamp('2012-08-13 14:29:33.11','yyyy-mm-dd hh24:mi:ss.ff') from dual union all
    select to_timestamp('2012-08-13 17:29:33.12','yyyy-mm-dd hh24:mi:ss.ff') from dual union all
    select to_timestamp('2012-08-14 02:29:33.12','yyyy-mm-dd hh24:mi:ss.ff') from dual
    )
    select
    d
    ,case
    when d >= to_date('2012-08-13','YYYY-MM-DD') + interval '3:30' HOUR to MINUTE
      and d <  to_date('2012-08-13','YYYY-MM-DD') + interval '14:30' HOUR to MINUTE
    then 'First'
    when d >= to_date('2012-08-13','YYYY-MM-DD') + interval '14:30' HOUR to MINUTE
      and d <  to_date('2012-08-13','YYYY-MM-DD') + 1 + interval '3:30' HOUR to MINUTE
    then 'Second'
    else 'None'
    end "Date-Interval"
    from testdata
    D                               Date-Interval
    13/08/2012 03:30:00,000000000   First
    13/08/2012 04:10:49,950000000   First
    13/08/2012 08:10:49,970000000   First
    13/08/2012 11:29:33,090000000   First
    13/08/2012 14:29:33,110000000   First
    13/08/2012 17:29:33,120000000   Second
    14/08/2012 02:29:33,120000000   Second

  • Problem with jump query - RSBBS

    Hi Gurus,
    I have a main query - called the 'Manager Hierarchy'. For this, I have maintained two jump queries namely - 'Job Changes' and 'Pay Rate Changes'.
    In my queries 'Job Changes' and 'Pay Rate Changes', I have maintained two variables: 'employee' and 'calendar year'. If I run these two individually, they run fine. But when I include them as jump queries for the Manager Hierarchy one, the execution stops with the following messages:
    Warning: Invalid filter on 0EMPLOYEE: Filter changed
    Warning: Invalid filter on 0CALYEAR: Filter changed
    Diagnosis:
    You tried to filter on a characteristic with an active presentation hierarchy by a node of another hierarchy.
    This can only be done by switching the hierarchy or, for example, by using the report/report interface.
    System Response:
    The filter is not evaluated for the characteristic, but removed instead.
    Can anyone please let me know the possible remedies after going through the log?
    Full points will be assigned.
    Regards,
    Srinivas

    Hi Srinivas,
    Try putting employee and calyear in the rows of the Manager Hierarchy query and check whether it works.
    If yes, then put calyear in the rows, set it to 'No display', and check whether it works.
    Or make both variables, i.e. employee and calyear, optional in the 'Job Changes' and 'Pay Rate Changes' queries.
    Regards
    Rohit

  • Schema Table Comparison

    Hi All,
    I've got 2 schemas with identical tables.
    I want to do a MINUS on the tables, but would like to do this with a procedure that then reports the differences into a <table_name>_diff table for each - this table should show records that are in schema 1 but not in schema 2, and records that are in schema 2 but not in schema 1.
    There are about 40 tables in total so a proc rather than doing it all manually would be superb...
    Any ideas ?

    Hi,
    I found the following code somewhere on the net:
    REM
    REM Edit the following three DEFINE statements to customize this script
    REM to suit your needs.
    REM
    REM Tables to be compared:
    DEFINE table_criteria = "table_name = table_name" -- all tables
    REM DEFINE table_criteria = "table_name != 'TEST'"
    REM DEFINE table_criteria = "table_name LIKE 'LOOKUP%' OR table_name LIKE 'C%'"
    REM Columns to be compared:
    DEFINE column_criteria = "column_name = column_name" -- all columns
    REM DEFINE column_criteria = "column_name NOT IN ('CREATED', 'MODIFIED')"
    REM DEFINE column_criteria = "column_name NOT LIKE '%_ID'"
    REM Database link to be used to access the remote schema:
    DEFINE dblink = "remote_db"
    SET SERVEROUTPUT ON SIZE 1000000
    SET VERIFY OFF
    DECLARE
      CURSOR c_tables IS
        SELECT   table_name
        FROM     user_tables
        WHERE    &table_criteria
        ORDER BY table_name;
      CURSOR c_columns (cp_table_name IN VARCHAR2) IS
        SELECT   column_name, data_type
        FROM     user_tab_columns
        WHERE    table_name = cp_table_name
        AND      &column_criteria
        ORDER BY column_id;
      TYPE t_char80array IS TABLE OF VARCHAR2(80) INDEX BY BINARY_INTEGER;
      v_column_list     VARCHAR2(32767);
      v_total_columns   INTEGER;
      v_skipped_columns INTEGER;
      v_count1          INTEGER;
      v_count2          INTEGER;
      v_rows_fetched    INTEGER;
      v_column_pieces   t_char80array;
      v_piece_count     INTEGER;
      v_pos             INTEGER;
      v_length          INTEGER;
      v_next_break      INTEGER;
      v_same_count      INTEGER := 0;
      v_diff_count      INTEGER := 0;
      v_error_count     INTEGER := 0;
      v_warning_count   INTEGER := 0;
      -- Use dbms_sql instead of native dynamic SQL so that Oracle 7 and Oracle 8
      -- folks can use this script.
      v_cursor          INTEGER := dbms_sql.open_cursor;
    BEGIN
      -- Iterate through all tables in the local database that match the
      -- specified table criteria.
      FOR r1 IN c_tables LOOP
        -- Build a list of columns that we will compare (those columns
        -- that match the specified column criteria). We will skip columns
        -- that are of a data type not supported (LOBs and LONGs).
        v_column_list := NULL;
        v_total_columns := 0;
        v_skipped_columns := 0;
        FOR r2 IN c_columns (r1.table_name) LOOP
          v_total_columns := v_total_columns + 1;
          IF r2.data_type IN ('BLOB', 'CLOB', 'NCLOB', 'LONG', 'LONG RAW') THEN
            -- The column's data type is one not supported by this script (a LOB
            -- or a LONG). We'll enclose the column name in comment delimiters in
            -- the column list so that the column is not used in the query.
            v_skipped_columns := v_skipped_columns + 1;
            IF v_column_list LIKE '%,' THEN
              v_column_list := RTRIM (v_column_list, ',') ||
                               ' /*, "' || r2.column_name || '" */,';
            ELSE
              v_column_list := v_column_list || ' /* "' || r2.column_name ||'" */ ';
            END IF;
          ELSE
            -- The column's data type is supported by this script. Add the column
            -- name to the column list for use in the data comparison query.
            v_column_list := v_column_list || '"' || r2.column_name || '",';
          END IF;
        END LOOP;
        -- Compare the data in this table only if it contains at least one column
        -- whose data type is supported by this script.
        IF v_total_columns > v_skipped_columns THEN
          -- Trim off the last comma from the column list.
          v_column_list := RTRIM (v_column_list, ',');
          BEGIN
            -- Get a count of rows in the local table missing from the remote table.
        dbms_sql.parse (
          v_cursor,
          'SELECT COUNT(*) FROM (' ||
          'SELECT ' || v_column_list || ' FROM "' || r1.table_name || '"' ||
          ' MINUS ' ||
          'SELECT ' || v_column_list || ' FROM "' || r1.table_name ||'"@&dblink)',
          dbms_sql.native
        );
            dbms_sql.define_column (v_cursor, 1, v_count1);
            v_rows_fetched := dbms_sql.execute_and_fetch (v_cursor);
            IF v_rows_fetched = 0 THEN
              RAISE NO_DATA_FOUND;
            END IF;
            dbms_sql.column_value (v_cursor, 1, v_count1);
            -- Get a count of rows in the remote table missing from the local table.
        dbms_sql.parse (
          v_cursor,
          'SELECT COUNT(*) FROM (' ||
          'SELECT ' || v_column_list || ' FROM "' || r1.table_name ||'"@&dblink'||
          ' MINUS ' ||
          'SELECT ' || v_column_list || ' FROM "' || r1.table_name || '")',
          dbms_sql.native
        );
            dbms_sql.define_column (v_cursor, 1, v_count2);
            v_rows_fetched := dbms_sql.execute_and_fetch (v_cursor);
            IF v_rows_fetched = 0 THEN
              RAISE NO_DATA_FOUND;
            END IF;
            dbms_sql.column_value (v_cursor, 1, v_count2);
            -- Display our findings.
            IF v_count1 = 0 AND v_count2 = 0 THEN
          -- No data discrepancies were found. Report the good news.
          dbms_output.put_line
            (r1.table_name || ' - Local and remote table contain the same data');
              v_same_count := v_same_count + 1;
              IF v_skipped_columns = 1 THEN
            dbms_output.put_line
              (r1.table_name || ' - Warning: 1 LOB or LONG column was omitted ' ||
               'from the comparison');
                v_warning_count := v_warning_count + 1;
              ELSIF v_skipped_columns > 1 THEN
            dbms_output.put_line
              (r1.table_name || ' - Warning: ' || TO_CHAR (v_skipped_columns) ||
               ' LOB or LONG columns were omitted from the comparison');
                v_warning_count := v_warning_count + 1;
              END IF;
            ELSE
              -- There is a discrepency between the data in the local table and
              -- the remote table. First, give a count of rows missing from each.
              IF v_count1 > 0 THEN
            dbms_output.put_line
              (r1.table_name || ' - ' ||
               LTRIM (TO_CHAR (v_count1, '999,999,990')) ||
               ' rows on local database missing from remote');
              END IF;
              IF v_count2 > 0 THEN
            dbms_output.put_line
              (r1.table_name || ' - ' ||
               LTRIM (TO_CHAR (v_count2, '999,999,990')) ||
               ' rows on remote database missing from local');
              END IF;
              IF v_skipped_columns = 1 THEN
            dbms_output.put_line
              (r1.table_name || ' - Warning: 1 LOB or LONG column was omitted ' ||
               'from the comparison');
                v_warning_count := v_warning_count + 1;
              ELSIF v_skipped_columns > 1 THEN
            dbms_output.put_line
              (r1.table_name || ' - Warning: ' || TO_CHAR (v_skipped_columns) ||
               ' LOB or LONG columns were omitted from the comparison');
                v_warning_count := v_warning_count + 1;
              END IF;
              -- Next give the user a query they could run to see all of the
              -- differing data between the two tables. To prepare the query,
              -- first we'll break the list of columns in the table into smaller
              -- chunks, each short enough to fit on one line of a telnet window
              -- without wrapping.
              v_pos := 1;
              v_piece_count := 0;
              v_length := LENGTH (v_column_list);
              LOOP
                EXIT WHEN v_pos = v_length;
                v_piece_count := v_piece_count + 1;
                IF v_length - v_pos < 72 THEN
                  v_column_pieces(v_piece_count) := SUBSTR (v_column_list, v_pos);
                  v_pos := v_length;
                ELSE
                  v_next_break :=
                    GREATEST (INSTR (SUBSTR (v_column_list, 1, v_pos + 72),
                                     ',"', -1),
                              INSTR (SUBSTR (v_column_list, 1, v_pos + 72),
                                     ',/* "', -1),
                              INSTR (SUBSTR (v_column_list, 1, v_pos + 72),
                                     ' /* "', -1));
                  v_column_pieces(v_piece_count) :=
                    SUBSTR (v_column_list, v_pos, v_next_break - v_pos + 1);
                  v_pos := v_next_break + 1;
                END IF;
              END LOOP;
              dbms_output.put_line ('Use the following query to view the data ' ||
                            'discrepancies:');
              dbms_output.put_line ('(');
              dbms_output.put_line ('SELECT ''Local'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"');
              dbms_output.put_line ('MINUS');
              dbms_output.put_line ('SELECT ''Local'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"@&dblink');
              dbms_output.put_line (') UNION ALL (');
              dbms_output.put_line ('SELECT ''Remote'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"@&dblink');
              dbms_output.put_line ('MINUS');
              dbms_output.put_line ('SELECT ''Remote'' "LOCATION",');
              FOR i IN 1..v_piece_count LOOP
                dbms_output.put_line (v_column_pieces(i));
              END LOOP;
              dbms_output.put_line ('FROM "' || r1.table_name || '"');
              dbms_output.put_line (');');
              v_diff_count := v_diff_count + 1;
            END IF;
          EXCEPTION
            WHEN OTHERS THEN
              -- An error occurred while processing this table. (Most likely it
              -- doesn't exist or has fewer columns on the remote database.)
              -- Show the error we encountered on the report.
              dbms_output.put_line (r1.table_name || ' - ' || SQLERRM);
              v_error_count := v_error_count + 1;
          END;
        END IF;
      END LOOP;
      -- Print summary information.
      dbms_output.put_line ('-------------------------------------------------');
      dbms_output.put_line
        ('Tables examined: ' || TO_CHAR (v_same_count + v_diff_count + v_error_count));
      dbms_output.put_line
        ('Tables with data discrepancies: ' || TO_CHAR (v_diff_count));
      IF v_warning_count > 0 THEN
        dbms_output.put_line
          ('Tables with warnings: ' || TO_CHAR (v_warning_count));
      END IF;
      IF v_error_count > 0 THEN
        dbms_output.put_line
          ('Tables that could not be checked due to errors: ' || TO_CHAR (v_error_count));
      END IF;
      dbms_sql.close_cursor (v_cursor);
    END;
    /
    I hope it'll help you!
    Regards,
    Simon

  • Q51: How to divide 'RDR1.Price' by 'RDR1.Rate' when blank

    Dear All,
    It appears my report below is reporting values in £ or $, so I assume I need to introduce the 'Rate' field to bring all values back to Sterling.
    However, when a sales order is in Sterling the 'Rate' field is left blank, and I get an error message whenever I try to introduce the 'Rate' field - the query does not like dividing by 'blank'.
    Any ideas?
    SELECT
    t0.cardcode as 'Customer Code',
    t3.slpname as 'Sales Person',
    sum(((T1.OpenQty)*(T1.Price))-t0.discsum) as 'Sales Value'
    FROM
    ordr t0 inner join rdr1 t1 on t0.docentry = t1.docentry
    inner join ocrd t2 on t0.cardcode = t2.cardcode
    inner join oslp t3 on t2.slpcode = t3.slpcode
    WHERE
    t0.docduedate between [%0] and [%1] and
    t1.linestatus = 'O' and
    isnull (t0.u_forecast,'N') !='Y'
    GROUP BY
    t0.cardcode,
    t3.slpname,
    t0.discsum
    ORDER BY t0.cardcode
    Robin

    Hi Robin,
    Try:
    SELECT
    t0.cardcode as 'Customer Code',
    t3.slpname as 'Sales Person',
    sum(T1.OpenSum) as 'Sales Value'
    FROM
    ordr t0 inner join rdr1 t1 on t0.docentry = t1.docentry
    inner join ocrd t2 on t0.cardcode = t2.cardcode
    inner join oslp t3 on t2.slpcode = t3.slpcode
    WHERE
    t0.docduedate between [%0] and [%1] and
    t1.linestatus = 'O' and
    isnull (t0.u_forecast,'N') !='Y'
    GROUP BY
    t0.cardcode,
    t3.slpname,
    t0.discsum
    ORDER BY t0.cardcode
    You have OpenSum and OpenSumFC, so hopefully OpenSum will always return the value in your local currency.
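    If you do still need to divide by Rate, one way (untested; falls back to 1 when the rate is blank/zero, i.e. for local-currency rows) is:
    sum(((T1.OpenQty * T1.Price) / ISNULL(NULLIF(T1.Rate, 0), 1)) - t0.discsum) as 'Sales Value'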
    Regards,
    Adrian

  • SNMP, Query dot1dStpPortState on Catalyst 2960-S

    Hi Community,
    I would like to be able to query the dot1dStpPortState object on the Catalyst 2960-S switches on our LAN. I'm running firmware
    c2960s-universalk9-mz.122-55.SE2.bin, and according to the Cisco SNMP Object Navigator the object is supported (via the BRIDGE-MIB).
    However, when I query using snmpwalk from my workstation:
    snmpwalk -v 2c -c bic-zua-ro 10.u.y.x 1.3.6.1.2.1.17.2.15.1.3
    I receive an error:
    SNMPv2-SMI::mib-2.17.2.15.1.3 = No Such Instance currently exists at this OID
    For the sake of comparison, querying our 4700:
    snmpwalk -v 2c -c bic-zua-ro 10.u.y.x 1.3.6.1.2.1.17.2.15.1.3
    returns (as expected, cropped)
    SNMPv2-SMI::mib-2.17.2.15.1.3.1 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.3 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.40 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.67 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.104 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.257 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.258 = INTEGER: 5
    SNMPv2-SMI::mib-2.17.2.15.1.3.259 = INTEGER: 5
    Is there some special configuration I need to do on our 2960s? The only SNMP-related setting I can see in the running config is the snmp-server community. In this case:
    snmp-server community bic-zua-ro RO
    Thanks in advance for any comments/assistance.
    Rgds
    Ian

    Hi Vinod,
    Wow, thanks for your prompt reply. Output from filtered running config pasted below
    TVS-Stack17#sh run | inclu snmp
    snmp-server community bic-zua-ro RO
    Interestingly, when I walk the entire dot1dBridge (1.3.6.1.2.1.17) I receive lots of data from both dot1dBase (1) and dot1dTp (4), but nothing from dot1dStp (2).
    I tried portAdditionalOperStatus and did not receive any response, but got lots of data from its parent portEntry (1).
    Running 'show spann' on the 2960 stack I can see various ports in forwarding and blocking state, as I would expect.
    Rgds,
    Ian
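
    One thing worth checking (an assumption on my part, not confirmed in this thread): on Catalyst switches running per-VLAN spanning tree, the BRIDGE-MIB STP objects are exposed per VLAN through community string indexing, so a plain walk of dot1dStp can return nothing. Appending a VLAN ID to the community string (VLAN 1 here, purely as an example) queries that VLAN's STP instance:
    snmpwalk -v 2c -c bic-zua-ro@1 10.u.y.x 1.3.6.1.2.1.17.2.15.1.3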

  • Getting a dump after writing a SELECT statement at TABLE level of SMARTFORM

    hi all,
    I have created a RATE COMPARISON REPORT. In that report I have to display the rate price against the RFQ number. For this purpose I have written code in a TABLE node of a SMART FORM. The code follows:
    SELECT SINGLE netpr
      INTO it_detail
      FROM ekpo
      INNER JOIN ekko ON
      ekko~ebeln = ekpo~ebeln
      WHERE ekko~BSTYP EQ 'F'
      AND   ekko~spras eq 'E'
    AND   ekpo~ebeln eq IT_DETAIL-EBELN
    This code is assigned to a TEXT FIELD at TABLE level. When I execute it I get the following dump.
    An exception occurred that is explained in detail below.
    The exception, which is assigned to class 'CX_SY_OPEN_SQL_DB', was not caught in procedure "%CO2" "(FORM)",
    nor was it propagated by a RAISING clause. Since the caller of the procedure could not have anticipated that the exception would occur, the current program is terminated.
    The reason for the exception is:
    In a SELECT access, the read field could not be placed in the target field provided. Either the conversion is not supported for the type of the target field, the target field is too small to hold the value, or the data does not have the format required for the target field.
    Could anybody tell me what the problem in my SELECT statement is that gives me a dump on execution?
    Thanks,
    sappk25

    Hi
    The select query is wrong, as you should not move the single field into a table (it_detail)....
    Change the query as follows...
    data: v_netpr type ekpo-netpr.
    SELECT SINGLE netpr
      INTO v_netpr
      FROM ekpo
      INNER JOIN ekko ON
      ekko~ebeln = ekpo~ebeln
      WHERE ekko~bstyp EQ 'F'
      AND   ekko~spras EQ 'E'
      AND   ekpo~ebeln EQ it_detail-ebeln.
    if sy-subrc eq 0.
      move v_netpr to it_detail.
    endif.
    Hope this helps....

  • Looking for help to increase performance on a DB XML database.

    I'll try to answer all the questions in the Performance Questionnaire from here.
    1) I'm primarily concerned with insertion performance. The best I've seen so far is about 6000 inserts per second. This is running inside a VMware VM with 3 GB of RAM. The VM is set up with 2 CPUs, each with 2 cores. The host machine has 8GB of RAM with a dual-core 2.67 GHz i7 (2 logical cores per CPU). The best performance I've seen is by running 2 threads of execution. A single thread only gets me about 2500 inserts per second.
    This is all within a very simple, isolated program. I'm trying to determine how to re-architect a more complicated system, but if I can't hope to hit 10k inserts per second with my sample, I don't see how it's possible to expand this out to something more complicated.
    2) Versions: BDBXML version 2.5.26 no special patches or config options
    3) BDB version 4.8.26, no special patches
    4) 2.67 GHz dual-core, hyperthreaded Intel i7 (4 logical processors)
    5) Host: Windows 7 64-bit, Guest: RHEL5 64-bit
    6) The underlying disk is a 320GB Western Digital Barracuda (SATA). It's a laptop hard drive; I believe it's only 5400 RPM. Although the VM does not have exclusive access to the drive, it is not the same drive as the host system drive (i.e. Windows runs off of the C drive, this is the D drive). The VM has a 60GB slice of this drive.
    7) The drive is NTFS-formatted for the host; the guest uses ext3.
    8) Host 8GB, guest 3GB (total usage when running tests is low, i.e. no swapping by guest or host).
    9) not currently using any replication
    10) Not using remote filesystem
    11) db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
    12) Using the C++ API for DBXML, and the C API for BDB
    using gcc/g++ version 4.1.2
    13) not using app server or web server
    14) flags to 'DB_ENV->open()':
          DB_SYSTEM_MEM
        | DB_INIT_MPOOL
        | DB_INIT_LOCK
        | DB_INIT_LOG
        | DB_INIT_TXN
        | DB_RECOVER
        | DB_THREAD
    other env flags explicitly set:
    DB_LOG_IN_MEMORY 1
    DB_LOG_ZERO 1
    set_cachesize(env, 1, 0, 1) // 1GB cache in single block
    DB_TXN_NOSYNC 1
    DB_TXN_WRITE_NOSYNC 1
    I am not using a DB_CONFIG file at this time.
    15) For the container config:
    transactional true
    transactionsNotDurable true
    containertype wholedoc
    indexNodes Off
    pagesize 4096
    16) In my little test program, I have a single container.
    16.1) flags are the same as listed above.
    16.2) I've tried with an empty container, and one with documents already inside and haven't noticed much difference at this point. I'm running 1, 2, 3, or 4 threads, each inserting 10k documents in a loop. Each insert is a single transaction.
    16.3) Wholedoc (tried both node & wholedoc, I believe wholedoc was slightly faster).
    16.4) The best performance I've seen is with a smaller document that is about 500 bytes.
    16.5) I'm not currently using any document data.
    17)sample document:
    <?xml version='1.0' encoding='UTF-8' standalone='no'?>
    <Record xmlns='http://someurl.com/test' JID='UUID-f9032e9c-7e9a-4f2c-b40e-621b0e66c47f'>
    <DataType>journal</DataType>
    <RecordID>f9032e9c-7e9a-4f2c-b40e-621b0e66c47f</RecordID>
    <Hostname>test.foo.com</Hostname>
    <HostUUID>34c90268-57ba-4d4c-a602-bdb30251ec77</HostUUID>
    <Timestamp>2011-11-10T04:09:55-05:00</Timestamp>
    <ProcessID>0</ProcessID>
    <User name='root'>0</User>
    <SecurityLabel>unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023</SecurityLabel>
    </Record>
    18. As mentioned, I'm looking to get at least 10k documents per second for insertion. Updates are much less frequent and can run slower. I am not doing any partial updates or replacing documents. In the actual system, there are minor updates that happen to document metadata, but again, these can be slower.
    19. I'm primarily concerned with insertion rate, not query.
    20. Xquery samples are not applicable at the moment.
    21. I am using transactions, no special flags aside from setting them all to 'not durable'
    22. Log files are currently stored on the same disk as the database.
    23. I'm not using AUTO_COMMIT
    24. I don't believe there are any non-transactional operations
    25. best performance from 2 threads doing insertions
    26. The primary way I've been testing performance is by using the 'clock_gettime(CLOCK_REALTIME)' calls inside my test program. The test program spawns 1 or more threads, each thread inserts 10k documents. The main thread waits for all the threads to complete, then exits. I'm happy to send the source code for this program if that would be helpful.
    27. As mentioned, I'm hoping to get at least 10k inserts per second.
    28. db_stat outputs:
    28.1 db_stat -c:
    93 Last allocated locker ID
    0x7fffffff Current maximum unused locker ID
    9 Number of lock modes
    1000 Maximum number of locks possible
    1000 Maximum number of lockers possible
    1000 Maximum number of lock objects possible
    40 Number of lock object partitions
    0 Number of current locks
    166 Maximum number of locks at any one time
    5 Maximum number of locks in any one bucket
    0 Maximum number of locks stolen by for an empty partition
    0 Maximum number of locks stolen for any one partition
    0 Number of current lockers
    35 Maximum number of lockers at any one time
    0 Number of current lock objects
    95 Maximum number of lock objects at any one time
    3 Maximum number of lock objects in any one bucket
    0 Maximum number of objects stolen by for an empty partition
    0 Maximum number of objects stolen for any one partition
    565631 Total number of locks requested
    542450 Total number of locks released
    0 Total number of locks upgraded
    29 Total number of locks downgraded
    22334 Lock requests not available due to conflicts, for which we waited
    23181 Lock requests not available due to conflicts, for which we did not wait
    0 Number of deadlocks
    0 Lock timeout value
    0 Number of locks that have timed out
    0 Transaction timeout value
    0 Number of transactions that have timed out
    784KB The size of the lock region
    10098 The number of partition locks that required waiting (0%)
    866 The maximum number of times any partition lock was waited for (0%)
    6 The number of object queue operations that required waiting (0%)
    7220 The number of locker allocations that required waiting (2%)
    0 The number of region locks that required waiting (0%)
    3 Maximum hash bucket length
    ====================
    28.2 db_stat -l:
    0x40988 Log magic number
    16 Log version number
    31KB 256B Log record cache size
    0 Log file mode
    10Mb Current log file size
    0 Records entered into the log
    0 Log bytes written
    0 Log bytes written since last checkpoint
    0 Total log file I/O writes
    0 Total log file I/O writes due to overflow
    0 Total log file flushes
    7 Total log file I/O reads
    1 Current log file number
    28 Current log file offset
    1 On-disk log file number
    28 On-disk log file offset
    0 Maximum commits in a log flush
    0 Minimum commits in a log flush
    160KB Log region size
    0 The number of region locks that required waiting (0%)
    ======================
    28.3 db_stat -m
    1GB Total cache size
    1 Number of caches
    1 Maximum number of caches
    1GB Pool individual cache size
    0 Maximum memory-mapped file size
    0 Maximum open file descriptors
    0 Maximum sequential buffer writes
    0 Sleep after writing maximum sequential buffers
    0 Requested pages mapped into the process' address space
    1127961 Requested pages found in the cache (99%)
    3622 Requested pages not found in the cache
    7590 Pages created in the cache
    3622 Pages read into the cache
    7663 Pages written from the cache to the backing file
    0 Clean pages forced from the cache
    0 Dirty pages forced from the cache
    0 Dirty pages written by trickle-sync thread
    11212 Current total page count
    11212 Current clean page count
    0 Current dirty page count
    131071 Number of hash buckets used for page location
    4096 Assumed page size used
    1142798 Total number of times hash chains searched for a page
    1 The longest hash chain searched for a page
    1127988 Total number of hash chain entries checked for page
    0 The number of hash bucket locks that required waiting (0%)
    0 The maximum number of times any hash bucket lock was waited for (0%)
    4 The number of region locks that required waiting (0%)
    0 The number of buffers frozen
    0 The number of buffers thawed
    0 The number of frozen buffers freed
    11218 The number of page allocations
    0 The number of hash buckets examined during allocations
    0 The maximum number of hash buckets examined for an allocation
    0 The number of pages examined during allocations
    0 The max number of pages examined for an allocation
    0 Threads waited on page I/O
    0 The number of times a sync is interrupted
    Pool File: temp.dbxml
    4096 Page size
    0 Requested pages mapped into the process' address space
    1127961 Requested pages found in the cache (99%)
    3622 Requested pages not found in the cache
    7590 Pages created in the cache
    3622 Pages read into the cache
    7663 Pages written from the cache to the backing file
    =================================
    28.4 db_stat -r (n/a, no replication)
    28.5 db_stat -t
    0/0 No checkpoint LSN
    Tue Oct 30 15:05:29 2012 Checkpoint timestamp
    0x8001d4d5 Last transaction ID allocated
    100 Maximum number of active transactions configured
    0 Active transactions
    5 Maximum active transactions
    120021 Number of transactions begun
    0 Number of transactions aborted
    120021 Number of transactions committed
    0 Snapshot transactions
    0 Maximum snapshot transactions
    0 Number of transactions restored
    48KB Transaction region size
    1385 The number of region locks that required waiting (0%)
    Active transactions:

    Replying with the output from iostat & vmstat (including this output in the previous reply exceeded the character count).
    =============================
    Output of vmstat while running 4 threads, each inserting 10k documents. The run took just under 18 seconds to complete (40,000 documents in ~18 seconds is roughly 2,200 inserts/second, well below the 10k/second target). I ran vmstat a few times while it was running:
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 0 0 896904 218004 1513268 0 0 14 30 261 83 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 889588 218004 1520500 0 0 14 30 261 84 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 882892 218012 1527124 0 0 14 30 261 84 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 896664 218012 1533284 0 0 14 30 261 85 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 890456 218012 1539748 0 0 14 30 261 85 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 884256 218020 1545800 0 0 14 30 261 86 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    4 0 0 878304 218020 1551520 0 0 14 30 261 86 1 1 98 0 0
    $ sudo vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 871980 218028 1558108 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    5 0 0 865780 218028 1563828 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 0 0 859332 218028 1570108 0 0 14 30 261 87 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    2 0 0 586756 218028 1572660 0 0 14 30 261 88 1 1 98 0 0
    $ vmstat
    procs -----------memory---------- ---swap-- -----io---- system -----cpu------
    r b swpd free buff cache si so bi bo in cs us sy id wa st
    3 2 0 788032 218104 1634624 0 0 14 31 261 88 1 1 98 0 0
    ================================
    sda1 is mounted on /boot
    sda2 is mounted on /
    sda3 is swap space
    Output of iostat, same scenario: 4 threads, each inserting 10k documents:
    $ iostat -x 1
    Linux 2.6.18-308.4.1.el5 (localhost.localdomain) 10/30/2012
    avg-cpu: %user %nice %system %iowait %steal %idle
    27.43 0.00 4.42 1.18 0.00 66.96
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 46.53 0.00 2.97 0.00 396.04 133.33 0.04 14.33 14.33 4.26
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 46.53 0.00 2.97 0.00 396.04 133.33 0.04 14.33 14.33 4.26
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    26.09 0.00 15.94 0.00 0.00 57.97
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    26.95 0.00 29.72 0.00 0.00 43.32
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.90 0.00 32.16 0.00 0.00 37.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    40.51 0.00 27.85 0.00 0.00 31.65
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    40.50 0.00 26.75 0.50 0.00 32.25
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.03 17.00 17.00 3.40
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.03 17.00 17.00 3.40
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    30.63 0.00 32.91 0.00 0.00 36.46
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.57 0.00 32.83 0.00 0.00 37.59
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    29.65 0.00 32.41 0.00 0.00 37.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    46.70 0.00 26.40 0.00 0.00 26.90
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    32.72 0.00 33.25 0.00 0.00 34.04
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 7.00 0.00 57.00 0.00 512.00 8.98 2.25 39.54 0.82 4.70
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 7.00 0.00 57.00 0.00 512.00 8.98 2.25 39.54 0.82 4.70
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    32.08 0.00 31.83 0.00 0.00 36.09
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.75 0.00 31.50 0.00 0.00 34.75
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.00 0.00 31.99 0.25 0.00 34.76
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.05 24.00 24.00 4.80
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 3.00 0.00 2.00 0.00 40.00 20.00 0.05 24.00 24.00 4.80
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    53.62 0.00 21.70 0.00 0.00 24.69
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    33.92 0.00 22.11 0.00 0.00 43.97
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    8.53 0.00 4.44 0.00 0.00 87.03
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    5.58 0.00 2.15 0.00 0.00 92.27
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.00 0.00 1.56 12.50 0.00 85.94
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 9.00 0.00 1.00 0.00 80.00 80.00 0.23 86.00 233.00 23.30
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 9.00 0.00 1.00 0.00 80.00 80.00 0.23 86.00 233.00 23.30
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    1.49 0.00 11.90 0.00 0.00 86.61
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.04 182.00 35.00 3.50
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 1.00 0.00 8.00 8.00 0.04 182.00 35.00 3.50
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.26 0.00 21.82 0.00 0.00 77.92
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    0.00 0.00 20.48 0.00 0.00 79.52
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    9.49 0.00 13.33 0.00 0.00 77.18
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    20.35 0.00 4.77 0.00 0.00 74.87
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    6.32 0.00 13.22 1.72 0.00 78.74
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 15302.97 0.99 161.39 7.92 34201.98 210.68 65.27 87.75 3.93 63.76
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 15302.97 0.99 161.39 7.92 34201.98 210.68 65.27 87.75 3.93 63.76
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    avg-cpu: %user %nice %system %iowait %steal %idle
    1.83 0.00 5.49 1.22 0.00 91.46
    Device: rrqm/s wrqm/s r/s w/s rsec/s wsec/s avgrq-sz avgqu-sz await svctm %util
    sda 0.00 21.00 0.00 95.00 0.00 91336.00 961.43 43.76 1003.00 7.18 68.20
    sda1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    sda2 0.00 21.00 0.00 95.00 0.00 91336.00 961.43 43.76 1003.00 7.18 68.20
    sda3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
    ===================
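    For reference, here is a minimal sketch of the kind of timing harness described in points 24-27 above. This is not the actual test program: the environment home directory ("envhome"), the document payload, the thread count, and the one-transaction-per-insert granularity are all assumptions, and error handling is omitted. It assumes the Berkeley DB XML C++ API (XmlManager/XmlContainer) and POSIX clock_gettime (older glibc needs -lrt):

    // insert_bench.cpp - hedged sketch, not the poster's actual test program.
    #include <dbxml/DbXml.hpp>
    #include <db_cxx.h>
    #include <cstdio>
    #include <ctime>
    #include <thread>
    #include <vector>

    using namespace DbXml;

    static const int DOCS_PER_THREAD = 10000;

    // Each worker inserts DOCS_PER_THREAD documents, one transaction per
    // insert (point 24: all operations transactional).
    static void worker(XmlManager *mgr, XmlContainer *cont, int id)
    {
        XmlUpdateContext uc = mgr->createUpdateContext();
        for (int i = 0; i < DOCS_PER_THREAD; ++i) {
            XmlTransaction txn = mgr->createTransaction();
            char name[64];
            std::snprintf(name, sizeof(name), "doc_%d_%d", id, i);
            cont->putDocument(txn, name, "<doc>payload</doc>", uc, 0);
            txn.commit();
        }
    }

    int main()
    {
        // Transactional environment; 1GB cache to match the db_stat -m output.
        DbEnv env(0);
        env.set_cachesize(1, 0, 1);
        env.open("envhome", DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK |
                            DB_INIT_LOG | DB_INIT_TXN | DB_THREAD, 0);
        XmlManager mgr(&env, 0);
        XmlContainer cont = mgr.createContainer("temp.dbxml",
                                                DBXML_TRANSACTIONAL);

        const int nthreads = 4;  // point 25: best results seen with 2
        timespec t0, t1;
        clock_gettime(CLOCK_REALTIME, &t0);

        std::vector<std::thread> threads;
        for (int i = 0; i < nthreads; ++i)
            threads.emplace_back(worker, &mgr, &cont, i);
        for (std::thread &t : threads)
            t.join();

        clock_gettime(CLOCK_REALTIME, &t1);
        double secs = (t1.tv_sec - t0.tv_sec) +
                      (t1.tv_nsec - t0.tv_nsec) / 1e9;
        std::printf("%d inserts in %.2f s = %.0f inserts/sec\n",
                    nthreads * DOCS_PER_THREAD, secs,
                    nthreads * DOCS_PER_THREAD / secs);
        return 0;
    }

    Dividing total documents by elapsed seconds gives the inserts/sec figure discussed above. With one commit per document each insert pays a synchronous log write, so grouping several puts into one transaction is a common first optimization when insert throughput, rather than concurrency, is the bottleneck.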

  • How to create a template for a Workflow

    I have created a workflow that has a couple of dataflows inside it.
    Now I need to create another 20 workflows with logic similar to the first, except for a couple of changes such as the source file. Instead of building the whole logic from scratch for each workflow, is there any way to use the first workflow as a template and create the 20 workflows quickly?
    Per our requirement, I need different names corresponding to the different source files I have.
    If I go with the option of replicating the data flows, I run into the following issue:
    -> A change to the name of the dataflow, or to any object within it (e.g. changing the source file / target table within a data flow), is also reflected in all the other data flows.
    I am looking for a way whereby a change to a workflow/dataflow does not change the original (template) workflow.
    A more detailed scenario:
    I have a workflow, say WF_Microsoft, to be created for pulling data from a source file "Microsoft".
    WF_Microsoft has two data flows, DF_Microsoft1 -> DF_Microsoft2 (DF_Microsoft1 connected to DF_Microsoft2).
    DF_Microsoft1 has SourceTable_Microsoft as input and StagingTable_Microsoft as output.
    DF_Microsoft2 has StagingTable_Microsoft as input and TargetTable_Microsoft as output, with some table-comparison and query transformations in between.
    Now I need to create workflows for a few other source files, e.g. "SAP", "Oracle" and so on, with the same logic; the only differences are the source file and the input/output tables within the dataflows. I am looking for a solution where I need not recreate the dataflows and drag the objects again for all 20 files, but can instead just replace the source and target tables.
    If I replicate, any change to the dataflow's name or objects is reflected in the original as well, but I need the original to stay the same.
    -Thanks,
    Naveen
