Is this a bug? (about analytic function)

create table emp (
empno varchar2(10),
deptno varchar2(10),
sal number(10)
);
insert into emp(empno,deptno,sal) values('1','10',101);
insert into emp(empno,deptno,sal) values('2','20',102);
insert into emp(empno,deptno,sal) values('3','20',103);
insert into emp(empno,deptno,sal) values('4','10',104);
insert into emp(empno,deptno,sal) values('5','30',105);
insert into emp(empno,deptno,sal) values('6','30',106);
insert into emp(empno,deptno,sal) values('7','30',107);
insert into emp(empno,deptno,sal) values('8','40',108);
insert into emp(empno,deptno,sal) values('9','30',109);
insert into emp(empno,deptno,sal) values('10','30',110);
insert into emp(empno,deptno,sal) values('11','30',100);
SELECT empno, deptno, sal,
last_value(sal)
OVER (PARTITION BY deptno order by sal desc) col1,
first_value(sal)
OVER (PARTITION BY deptno order by sal asc) col2,
first_value(sal)
OVER (PARTITION BY deptno order by sal desc) col3,
last_value(sal)
OVER (PARTITION BY deptno order by sal asc) col4
FROM emp;
col2 and col3 return what I expect.
I don't know why col1 and col4 return these kinds of results.

Well... I learned something new today!
This is because you have not defined a windowing clause in your analytic query. If you do not specify a windowing clause, the default used is RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. So your query is equivalent to:
  1  select empno, deptno, sal,
  2  last_value(sal) over (
  3     partition by deptno
  4     order by sal desc
  5     range between unbounded preceding and current row
  6  ) col1,
  7  first_value(sal) over (
  8     partition by deptno
  9     order by sal asc
10     range between unbounded preceding and current row
11  ) col2,
12  first_value(sal) over (
13     partition by deptno
14     order by sal desc
15     range between unbounded preceding and current row
16  ) col3,
17  last_value(sal) over (
18     partition by deptno
19     order by sal asc
20     range between unbounded preceding and current row
21  ) col4
22* from emp
SQL> /
     EMPNO     DEPTNO        SAL       COL1       COL2       COL3       COL4
         4         10        104        104        101        104        104
         1         10        101        101        101        104        101
         3         20        103        103        102        103        103
         2         20        102        102        102        103        102
        10         30        110        110        100        110        110
         9         30        109        109        100        110        109
         7         30        107        107        100        110        107
         6         30        106        106        100        110        106
         5         30        105        105        100        110        105
        11         30        100        100        100        110        100
        11         30        100        100        100        110        100
         8         40        108        108        108        108        108
12 rows selected.
What you need to do is specify the correct windowing clause as follows:
  1  select empno, deptno, sal,
  2  last_value(sal) over (
  3     partition by deptno
  4     order by sal desc
  5     range between unbounded preceding and unbounded following
  6  ) col1,
  7  first_value(sal) over (
  8     partition by deptno
  9     order by sal asc
10  ) col2,
11  first_value(sal) over (
12     partition by deptno
13     order by sal desc
14  ) col3,
15  last_value(sal) over (
16     partition by deptno
17     order by sal asc
18     range between unbounded preceding and unbounded following
19  ) col4
20* from emp
SQL> /
     EMPNO     DEPTNO        SAL       COL1       COL2       COL3       COL4
         4         10        104        101        101        104        104
         1         10        101        101        101        104        104
         3         20        103        102        102        103        103
         2         20        102        102        102        103        103
        10         30        110        100        100        110        110
         9         30        109        100        100        110        110
         7         30        107        100        100        110        110
         6         30        106        100        100        110        110
         5         30        105        100        100        110        110
        11         30        100        100        100        110        110
        11         30        100        100        100        110        110
         8         40        108        108        108        108        108
12 rows selected.
...which gives the expected results. Or just use max(), which is clearer anyway :)
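For instance, a minimal sketch of that max()/min() form (my own illustration, not from the original thread; col1/col2 are simply the department minimum and col3/col4 the department maximum):
select empno, deptno, sal,
       min(sal) over (partition by deptno) col1,
       min(sal) over (partition by deptno) col2,
       max(sal) over (partition by deptno) col3,
       max(sal) over (partition by deptno) col4
from emp;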
This is documented on Metalink, bug number 5684819:
"If you omit the windowing_clause of the analytic_clause, it defaults to RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW. This default will return a value that sometimes may seem unexpected, because the last value in the window is at the bottom of the window, which is not fixed. It keeps changing as the current row changes. For expected results, specify the windowing clause as RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING. Another option, is to specify the windowing clause as RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING."
cheers,
Anthony

Similar Messages

  • Is this a bug about template??

    I made a template with an 800*600 full-screen background.
    When creating a project based on this template, Captivate guides me
    to record new slides. The problem is that it used a size of 780*580 for
    recording the new slides and left some white space. If I cancel the
    operation and manually click 'Record additional slides...' (I'm not
    sure what it is called in the English environment), it comes up with the
    size I expect, 800*600. How can I avoid these extra steps? I hope I
    can record slides at the correct size when starting a new project
    based on the template...
    I'm using Captivate 1.0.1290 (Traditional Chinese).

    The actual capture area for 800x600 full-screen is
    780x580, so what you are seeing is designed behavior, I think. The
    size difference allows room for the "chrome" to display on an
    actual 800x600 resolution monitor. I think you already knew all
    that ...
    So the problem really seems to be: why is Captivate using two
    different sizes, depending on which menu is used to capture
    additional slides? It really seems like a bug to me - but all I can
    reference is the English version of Captivate.
    I think I would change my recording area selection, and just
    set the recording area to 780x580 instead of the "full-screen"
    option. In the meantime, you might want to report this as a bug -
    if you think it is.
    This is the link to the "Bug Report" if you need it. Have a great
    day!

  • Is this a bug in decode function?????

    Hi,
    I'm trying to run the following query and it blows up with an ORA-01722. I'm dealing with the sample HR schema that comes with the database. The SQL is:
    select employee_id, start_date, end_date, department_id,
    decode(job_id, 'IT_PROG', 1,'AC_ACCOUNT', 2,'AC_MGR', 3,'MK_REP', 4,'ST_CLERK', 5,'AD_ASST', 6,'SA_REP', 7,'AC_ACCOUNT', 8,'other') Job_num, job_id
    from job_history
    It blows up. This is what I get:
    ORA-01722: invalid number
    But strangely the below query works:
    select employee_id, start_date, end_date, department_id,
    decode(job_id, 'IT_PROG', '1','AC_ACCOUNT', '2','AC_MGR', '3','MK_REP', '4','ST_CLERK', '5','AD_ASST', '6','SA_REP', '7','AC_ACCOUNT', '8', 9999)Job_num, job_id
    from job_history     
    The result of the second query in a comma delimited format is as below:
    EMPLOYEE_ID,START_DATE,END_DATE,DEPARTMENT_ID,JOB_NUM,JOB_ID
    102,1/13/1993,7/24/1998,60,1,IT_PROG
    101,9/21/1989,10/27/1993,110,2,AC_ACCOUNT
    101,10/28/1993,3/15/1997,110,3,AC_MGR
    201,2/17/1996,12/19/1999,20,4,MK_REP
    114,3/24/1998,12/31/1999,50,5,ST_CLERK
    122,1/1/1999,12/31/1999,50,5,ST_CLERK
    200,9/17/1987,6/17/1993,90,6,AD_ASST
    176,3/24/1998,12/31/1998,80,7,SA_REP
    176,1/1/1999,12/31/1999,80,9999,SA_MAN
    200,7/1/1994,12/31/1998,90,2,AC_ACCOUNT
    Any idea why this happens? I'm going bonkers. Your help is greatly appreciated.
    Best Regards,
    Naveen.

    It's a feature :) Databases generally provide type coercion for common data types assuming that the user actually meant the same value but as a different type. Hence, '1' -> 1. However, you couldn't directly convert 'A' to a relevant number as it could mean 10 in hex, 65 as an ASCII value, etc.
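    More concretely (my own illustration, based on documented DECODE behaviour rather than anything in this thread): DECODE takes its return datatype from the first result expression, so the default value is implicitly converted to that type.
    -- First result is the NUMBER 1, so the default 'other' must convert to a number:
    select decode('X', 'A', 1, 'other') from dual;   -- ORA-01722: invalid number
    -- First result is the string '1', so the default 9999 becomes '9999':
    select decode('X', 'A', '1', 9999) from dual;    -- returns '9999'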

  • Question about analytic function

    I have a table (t1) that contains a text column (source_ip) and a corresponding number column (attack_count). I'm trying to group by source_ip, sum attack_count per unique source_ip, and then express each source_ip's attack count as a percentage of the total attack count.
    For example (t1):
    source_ip attack_count
    text1 5
    text2 4
    text1 1
    My output would (should) look like:
    col.a col.b col.c
    text1 6 60.00
    text2 4 40.00
    This is what I've come up with so far:
    SELECT a.source_ip, ROUND(SUM(a.attack_count)/b.total_attack*100, 2) PERCENTAGE, sum(attack_count)
    FROM t1 a, (SELECT sum(attack_count) total_attack FROM t1) b
    where time between '01-AUG-05' and '01-SEP-05'
    GROUP BY a.source_ip, b.total_attack
    order by PERCENTAGE DESC
    The query runs, groups source_ip, and sums attack_count just fine, but it returns an invalid percentage. I can't quite figure out where I'm going wrong.
    Any ideas? Thanks!

    test@orcl> create table attacks ( source_ip varchar2(15), attack_count number );
    Table created.
    Elapsed: 00:00:00.09
    test@orcl> insert into attacks values ( 'text1', 5 );
    1 row created.
    Elapsed: 00:00:00.00
    test@orcl> insert into attacks values ( 'text2', 4 );
    1 row created.
    test@orcl> insert into attacks values ( 'text1', 1 );
    1 row created.
    test@orcl> commit;
    Commit complete.
    -- option1:
    test@orcl> select source_ip,
    2 sum(attack_count) sum_attack_count,
    3 round(sum(attack_count)/max(t.tot)*100,2) percent
    4 from attacks, ( select sum(attack_count) tot from attacks ) t
    5 group by source_ip
    6 order by percent desc;
    SOURCE_IP SUM_ATTACK_COUNT PERCENT
    text1 6 60
    text2 4 40
    2 rows selected.
    -- option2:
    test@orcl> select source_ip,
    2 sum_attack_count,
    3 round((sum_attack_count/sum(sum_attack_count) over ())*100, 2) per
    4 from
    5 (
    6 select source_ip,
    7 sum(attack_count) sum_attack_count
    8 from attacks
    9 group by source_ip
    10 )
    11 order by per desc;
    SOURCE_IP SUM_ATTACK_COUNT PER
    text1 6 60
    text2 4 40
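    A third variant (my own addition, not part of the original reply) uses RATIO_TO_REPORT, which exists for exactly this kind of share-of-total calculation:
    select source_ip,
           sum(attack_count) sum_attack_count,
           round(ratio_to_report(sum(attack_count)) over () * 100, 2) per
      from attacks
     group by source_ip
     order by per desc;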

  • Analytical Function in Oracle

    I have a situation where I have partitioned a record set. If, in any of the partitions of that record set, the value of one field (status) is '45', I need to order the result of that partition by 'outdate' desc, 'receiveddate' desc, and order the other partitions by 'key' desc, 'sequence' desc, 'outdate' desc.
    So the query looks like -
    select row_number() over (partition by key order by sequence) RowNo, key, seq, status, outdate, receivedate from table1 where .........
    order by ????
    RowNo Key Seq status outdate receiveddate
    1 200 0 24 9/13/2009 9/12/2009
    2 200 1 23 9/10/2009 9/09/2009
    3 200 2 24 9/09/2009 9/08/2009
    1 210 0 24 9/13/2009 9/12/2009
    2 210 1 *45* 9/09/2009 9/08/2009
    3 210 2 24 9/10/2009 9/09/2009
    So I need a query that will order the first partition by 'key' desc, 'sequence' desc, 'outdate' desc and the second partition (since a status of '45' exists in the second partition) by 'outdate' desc, 'receiveddate' desc.
    The output of the query should look like
    RowNo Key Seq status outdate receiveddate
    1 200 0 24 9/13/2009 9/12/2009
    2 200 1 23 9/10/2009 9/09/2009
    3 200 2 24 9/09/2009 9/08/2009
    1 210 0 24 9/13/2009 9/12/2009
    2 210 2 24 9/10/2009 9/09/2009
    3 210 1 *45* 9/09/2009 9/08/2009
    I am not sure if this is possible using an analytic function.
    I would really appreciate it if anyone could enlighten me on this.
    Thanks in advance

    Hi,
    Welcome to the forum!
    You can use analytic functions in the ORDER BY clause.
    I don't have your tables, so I'll use scott.emp to illustrate.
    The following query sorts by deptno first. After that, the sort order for departments that contain at least one Salesman is:
    (a) job
    (b) ename
    Deptno=30 happens to be the only department with a Salesman, so it is the only one sorted as above.
    The other departements will be sorted by
    (a) sal
    (b) job
    SELECT    deptno
    ,         ename
    ,         job
    ,         sal
    FROM      scott.emp
    ORDER BY  deptno
    ,         CASE
                  WHEN  COUNT ( CASE
                                    WHEN  job = 'SALESMAN'
                                    THEN  1
                                END
                              ) OVER (PARTITION BY deptno) > 0
                  THEN  ROW_NUMBER () OVER ( PARTITION BY  deptno
                                             ORDER BY      job
                                           ,               ename
                                           )
                  ELSE  ROW_NUMBER () OVER ( PARTITION BY  deptno
                                             ORDER BY      sal
                                           ,               job
                                           )
              END
    ;
    Output:
        DEPTNO ENAME      JOB              SAL
            10 MILLER     CLERK           1300
            10 CLARK      MANAGER         2450
            10 KING       PRESIDENT       5000
            20 SMITH      CLERK            800
            20 ADAMS      CLERK           1100
            20 JONES      MANAGER         2975
            20 SCOTT      ANALYST         3000
            20 FORD       ANALYST         3000
            30 JAMES      CLERK            950
            30 BLAKE      MANAGER         2850
            30 ALLEN      SALESMAN        1600
            30 MARTIN     SALESMAN        1250
            30 TURNER     SALESMAN        1500
            30 WARD       SALESMAN        1250 
    In the small set of sample data you posted, the results you want can be obtained simply using
    ORDER BY  key
    ,            outdate     DESC
    I assume that's just a coincidence.
    If you need help, post some sample data that really requires looking at the status column to get the right results. Post the data in some executable form, such as CREATE TABLE and INSERT statements, or, as Salim did, a WITH clause. (Perhaps you can just add or change a couple of rows in the sample data Salim already posted.)

  • Analytical function count(*) with order by Rows unbounded preceding

    Hi
    I have a query about the analytic function count(*) with ORDER BY (col) ROWS UNBOUNDED PRECEDING.
    If I remove the ORDER BY ... ROWS UNBOUNDED PRECEDING clause, it behaves differently.
    Can anybody tell me what the impact of ORDER BY ... ROWS UNBOUNDED PRECEDING is on the count(*) analytic function?
    Please help me and thanks in advance.

    Sweety,
    CURRENT ROW is the default end point of the window if no windowing clause is provided. So if you specify ROWS UNBOUNDED PRECEDING, it basically means that you want to compute COUNT(*) from the beginning of the window to the current row. In other words, ROWS UNBOUNDED PRECEDING implicitly indicates that the end of the window is the current row.
    The beginning of the window depends on how you have defined the PARTITION BY clause in the analytic function.
    If you specify ROWS 2 PRECEDING, it will calculate COUNT(*) from 2 rows prior to the current row through the current row. It is a physical offset.
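    As a quick illustration (my own sketch against the scott.emp demo table, not from the original thread):
    select deptno, ename, sal,
           count(*) over (partition by deptno) cnt_whole_dept,
           count(*) over (partition by deptno order by sal
                          rows unbounded preceding) running_cnt
      from scott.emp
     order by deptno, sal;
    -- cnt_whole_dept is the same for every row of a department, while
    -- running_cnt climbs from 1 up to the department's row count.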
    Regards,
    Message was edited by:
    henryswift

  • About FIRST_ROW analytic function; can anyone help?

    Hi everyone,
    Can anyone help me with this simple query?
    Let's suppose I have this query (the with clause contains some data):
    WITH T AS (
    SELECT 'TEST' as COL1, 1 as COL2, 'z' as COL3 FROM dual
    UNION ALL
    SELECT 'TEST', 2, 'y' FROM dual
    UNION ALL
    SELECT 'TEST', 2, 'h' FROM dual
    )
    SELECT FIRST_VALUE(COL1) OVER (PARTITION BY COL1), COL2, COL3
      FROM T;
    I would like to have only the first row returned. I was thinking that with FIRST_VALUE it would be possible, but it returns 3 records.
    So can anyone help me to have only the first record returned?
    TEST     1     z
    This is just a simple example. In reality I have thousands of records. I need to get only the first record based on the name (TEST in this example). We don't really care about the other columns.
    Thanks for your help,

    user13117585 wrote:
    I would like to have only the first row returned. I was thinking that with FIRST_VALUE it would be possible, but it returns 3 records.
    Analytic functions don't filter rows, they just calculate values from some part of the result set.
    Aggregating is the most efficient way of doing this query:
    SQL> WITH T AS (
      2  SELECT 'TEST' as COL1, 1 as COL2, 'z' as COL3 FROM dual
      3  UNION ALL
      4  SELECT 'TEST', 2, 'y' FROM dual
      5  UNION ALL
      6  SELECT 'TEST', 2, 'h' FROM dual
      7  )
      8  select col1
      9       , min(col2) col2
    10       , max(col3) keep (dense_rank first order by col2) col3
    11    from t
    12   group by col1
    13  /
    COL1       COL2 C
    TEST          1 z
    1 row selected.
    Regards,
    Rob.
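    If the other columns of that first row are needed as well, a common alternative (my own sketch, not part of Rob's reply) is to rank the rows and filter:
    WITH t AS (
      SELECT 'TEST' AS col1, 1 AS col2, 'z' AS col3 FROM dual UNION ALL
      SELECT 'TEST', 2, 'y' FROM dual UNION ALL
      SELECT 'TEST', 2, 'h' FROM dual
    )
    SELECT col1, col2, col3
      FROM (SELECT t.*,
                   ROW_NUMBER() OVER (PARTITION BY col1 ORDER BY col2) rn
              FROM t)
     WHERE rn = 1;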

  • Analytical functions approach for this scenario?

    Here is my data:
    SQL*Plus: Release 11.2.0.2.0 Production on Tue Feb 26 17:03:17 2013
    Copyright (c) 1982, 2010, Oracle.  All rights reserved.
    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
    With the Partitioning, OLAP, Data Mining and Real Application Testing options
    SQL> select * from batch_parameters;
           LOW         HI MIN_ORDERS MAX_ORDERS
            51        100          6          8
           121        200          1          5
           201       1000          1          1
    SQL> select * from orders;
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4905        154
            4899        143
            4925        123
            4900        110
            4936        106
            4901        103
            4911        101
            4902         91
            4903         91
            4887         90
            4904         85
            4926         81
            4930         75
            4934         73
            4935         71
            4906         68
            4907         66
            4896         57
            4909         57
            4908         56
            4894         55
            4910         51
            4912         49
            4914         49
            4915         48
            4893         48
            4916         48
            4913         48
            2894         47
            4917         47
            4920         46
            4918         46
            4919         46
            4886         45
            2882         45
            2876         44
            2901         44
            4921         44
            4891         43
            4922         43
            4923         42
            4884         41
            4924         40
            4927         39
            4895         38
            2853         38
            4890         37
            2852         37
            4929         37
            2885         37
            4931         37
            4928         37
            2850         36
            4932         36
            4897         36
            2905         36
            4933         36
            2843         36
            2833         35
            4937         35
            2880         34
            4938         34
            2836         34
            2872         34
            2841         33
            4889         33
            2865         31
            2889         30
            2813         29
            2902         28
            2818         28
            2820         27
            2839         27
            2884         27
            4892         27
            2827         26
            2837         22
            2883         20
            2866         18
            2849         17
            2857         17
            2871         17
            4898         16
            2840         15
            4874         13
            2856          8
            2846          7
            2847          7
            2870          7
            4885          6
            1938          6
            2893          6
            4888          2
            4880          1
            4875          1
            4881          1
            4883          1
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4879          1
            2899          1
            2898          1
            4882          1
            4877          1
            4876          1
            2891          1
            2890          1
            2892          1
            4878          1
    107 rows selected.
    SQL>
    The batch_parameters:
    hi - high count of lines in the batch.
    low - low count of lines in the batch.
    min_orders - minimum number of orders in the batch.
    max_orders - maximum number of orders in the batch.
    The issue is to create optimally sized batches for us to pick the orders. Usually, you have to stick within the given low-hi count, but there is a leeway of, let's say, around 5 percent on the batch size (for the number of lines in the batch).
    But for the number of orders in a batch, the leeway is zero.
    So, I have to assign these 'orders' into the optimal mix of batches. Now, for every run, if I don't find the mix I am looking for, the last batch could be as small as one line and one order. But every order HAS to be batched in that run. No exceptions.
    I have a procedure that 'sort of' does this, but it leaves non-optimal orders alone. There is a potential for orders not getting batched because they didn't fall into the optimal mix, potentially missing our required dates. (I can write another procedure to clean up afterwards.)
    I was thinking (maybe just a general direction would be enough), given what analytic functions can do these days, that somebody could come up with the SQL that gets us the batch number (think of it as a sequence starting at 1).
    Also, the batch_parameters limits are not hard and fast. Those numbers can change, but they give you a general idea.
    Any ideas?

    OK, sorry about that. Those were just guesstimates. I ran the program and here are the results.
    SQL> SELECT SUM(line_count) no_of_lines_in_batch,
      2         COUNT(*) no_of_orders_in_batch,
      3         batch_no
      4    FROM orders o
      5   GROUP BY o.batch_no;
    NO_OF_LINES_IN_BATCH NO_OF_ORDERS_IN_BATCH   BATCH_NO
                     199                     4     241140
                      99                     6     241143
                     199                     5     241150
                     197                     6     241156
                     196                     5     241148
                     199                     6     241152
                     164                     6     241160
                     216                     2     241128
                     194                     6     241159
                     297                     2     241123
                     199                     3     241124
                     192                     2     241132
                     199                     6     241136
                     199                     5     241142
                      94                     7     241161
                     199                     6     241129
                     154                     2     241135
                     193                     6     241154
                     199                     5     241133
                     199                     4     241138
                     199                     6     241146
                     191                     6     241158
    22 rows selected.
    SQL> select * from orders;
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4905        154     241123
            4899        143     241123
            4925        123     241124
            4900        110     241128
            4936        106     241128
            4901        103     241129
            4911        101     241132
            4903         91     241132
            4902         91     241129
            4887         90     241133
            4904         85     241133
            4926         81     241135
            4930         75     241124
            4934         73     241135
            4935         71     241136
            4906         68     241136
            4907         66     241138
            4896         57     241136
            4909         57     241138
            4908         56     241138
            4894         55     241140
            4910         51     241140
            4914         49     241142
            4912         49     241140
            4915         48     241142
            4916         48     241142
            4913         48     241142
            4893         48     241143
            2894         47     241143
            4917         47     241146
            4919         46     241146
            4918         46     241146
            4920         46     241146
            2882         45     241148
            4886         45     241148
            2901         44     241148
            2876         44     241148
            4921         44     241140
            4891         43     241150
            4922         43     241150
            4923         42     241150
            4884         41     241150
            4924         40     241152
            4927         39     241152
            2853         38     241152
            4895         38     241152
            4931         37     241154
            2885         37     241152
            4929         37     241154
            4890         37     241154
            4928         37     241154
            2852         37     241154
            2843         36     241156
            2850         36     241156
            4932         36     241156
            4897         36     241156
            4933         36     241158
            2905         36     241156
            2833         35     241158
            4937         35     241158
            4938         34     241158
            2880         34     241159
            2872         34     241159
            2836         34     241158
            2841         33     241159
            4889         33     241159
            2865         31     241159
            2889         30     241150
            2813         29     241159
            2902         28     241160
            2818         28     241160
            4892         27     241160
            2884         27     241160
            2820         27     241160
            2839         27     241160
            2827         26     241161
            2837         22     241133
            2883         20     241138
            2866         18     241148
            2849         17     241161
            2871         17     241156
            2857         17     241158
            4898         16     241161
            2840         15     241161
            4874         13     241146
            2856          8     241154
            2847          7     241161
            2846          7     241161
            2870          7     241152
            2893          6     241142
            1938          6     241161
            4888          2     241129
            2890          1     241133
            2899          1     241136
            4877          1     241143
            4875          1     241143
            2892          1     241136
    ORDER_NUMBER LINE_COUNT   BATCH_NO
            4878          1     241146
            4876          1     241136
            2891          1     241133
            4880          1     241129
            4883          1     241143
            4879          1     241143
            2898          1     241129
            4882          1     241129
            4881          1     241124
    106 rows selected.
    As you can see, my code is a little buggy in that it may not have followed the strict batch_parameters. But this is acceptable, as long as it is in the general area.
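    As a general direction for the batch-number SQL the original post asked about, here is a rough sketch of my own (not from the thread): it cuts batches on a running total of line_count at roughly 200 lines each, and it ignores the min/max order-count limits, so it is only a starting point.
    SELECT order_number,
           line_count,
           CEIL(SUM(line_count)
                  OVER (ORDER BY line_count DESC, order_number
                        ROWS UNBOUNDED PRECEDING) / 200) AS batch_no
      FROM orders
     ORDER BY line_count DESC, order_number;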

  • Is this a bug of Outlook 2007 about images displaying in signature?

    I've done many tests and researched on websites and the MS KB, but still got no solution.
    My issue is:
    I made a signature with images linked from a website which can be easily accessed.
    I set up this signature in Outlook 2007. When I compose a new mail and choose the signature I set, it fails to show the images a high percentage of the time. Meanwhile, if I try to get into "Signature"-"Signature...",
    Outlook 2007 gets stuck, and then you cannot close Outlook or open Internet Explorer unless you kill the OUTLOOK.exe process.
    1. Tests were done on a clean XP system with a fresh install of Office 2007 Standard. Some other staff members who helped me test the signature also report the same issue on Office 2007.
    2. Images are rendered at 96 dpi. They are all very small files stored on the website and can be easily accessed without network connectivity problems.
    3. The signature is made with simple HTML, not with the Outlook signature setup wizard, but in this case you can ignore which method I used to create the signature. The images in the signature display fine in Outlook 2003 &
    2010. I have also tried inserting images using "link to file" in the Outlook signature setup wizard, with the same issue.
    4. Please don't suggest storing the images locally. These images need to be updated periodically, and you cannot ask the company staff to update them manually by themselves. Even if the images were stored locally, they still would not show
    in Outlook 2007 a high percentage of the time.
    5. I've tried setting up the signature with and without an account profile, with the same issue.
    6. I've tried, without an account profile, just copying the signature file to the Outlook signature folder, unplugging the network cable, clicking "new mail" and then loading the signature. Of course the images aren't shown because there is no network connection,
    and then when I try to get into "Signature"-"Signature...", the Outlook interface also gets stuck. So I think Outlook 2007 may have some problem detecting network connectivity.
    7. It is not possible to upgrade the version of Office. Since Office 2007 isn't out of date and is powerful enough for us, no one wants to pay more money just to avoid this issue.
    I don't know why I cannot upload a screenshot for troubleshooting. If needed, I can send a mail with a screenshot attached.
    So far I still have no solution. I think this is a bug in Outlook 2007, because the same signature causes no problem in Outlook 2003 & 2010. I hope someone on the MS staff can see this article and report it to the technical support center.
    I would appreciate anyone's kind help, but please make sure you understand what I am talking about; I don't want to waste time for either of us.
    Thanks in advance.

    What kind of problem do you have with images displaying in the signature?
    How do you add the image to the message body when you send the message?
    Does it show correctly through Web-based access?
    Outlook 2007 doesn't support some style elements; you may reference the link below:
    http://www.campaignmonitor.com/css/
    Thanks.
    Tony Chen
    TechNet Community Support
    Thanks for your reply. I know that some style elements aren't supported in Outlook, but this is not the reason.
    Please refer to my previous post; I posted it but got no response.
    http://social.technet.microsoft.com/Forums/office/en-US/481170b1-f23f-4d46-9914-823326491846/is-this-a-bug-of-outlook-2007-about-images-displaying-in-signature?forum=outlook

  • How to achive this using analytical function-- please help

    Version: 10g.
    This code works just fine for my requirement. I am trying to learn analytic functions and implement them in the query below. I tried using row_number,
    but I couldn't achieve the desired results. Please give me some ideas.
    SELECT c.tax_idntfctn_nmbr irs_number, c.legal_name irs_name,
           f.prvdr_lctn_iid
      FROM tax_entity_detail c,
           provider_detail e,
           provider_location f,
           provider_location_detail pld
    WHERE c.tax_entity_sid = e.tax_entity_sid
       AND e.prvdr_sid = f.prvdr_sid
       AND pld.prvdr_lctn_iid = f.prvdr_lctn_iid
       AND c.oprtnl_flag = 'A'
       AND c.status_cid = 2
       AND e.oprtnl_flag = 'A'
       AND e.status_cid = 2
       AND (c.from_date) =
              (SELECT MAX (c1.from_date)
                 FROM tax_entity_detail c1
                WHERE c1.tax_entity_sid = c.tax_entity_sid
                  AND c1.oprtnl_flag = 'A'
                  AND c1.status_cid = 2)
       AND (e.from_date) =
              (SELECT MAX (c1.from_date)
                 FROM provider_detail c1
                WHERE c1.prvdr_sid = e.prvdr_sid
                  AND c1.oprtnl_flag = 'A'
                  AND c1.status_cid = 2)
       AND pld.oprtnl_flag = 'A'
       AND pld.status_cid = 2
       AND (pld.from_date) =
              (SELECT MAX (a1.from_date)
                 FROM provider_location_detail a1
                WHERE a1.prvdr_lctn_iid = pld.prvdr_lctn_iid
                  AND a1.oprtnl_flag = 'A'
                   AND a1.status_cid = 2)
    thanks
    Edited by: new learner on May 24, 2010 7:53 AM
    Edited by: new learner on May 24, 2010 10:50 AM

    Maybe like this (not tested)...
    select *
    from
    (
    SELECT c.tax_idntfctn_nmbr irs_number, c.legal_name irs_name,
    f.prvdr_lctn_iid, c.from_date as c_from_date, max(c.from_date) over(partition by c.tax_entity_sid) as max_c_from_date,
    e.from_date as e_from_date, max(e.from_date) over(partition by e.prvdr_sid) as max_e_from_date,
    pld.from_date as pld_from_date, max(pld.from_date) over(partition by pld.prvdr_lctn_iid) as max_pld_from_date
    FROM tax_entity_detail c,
    provider_detail e,
    provider_location f,
    provider_location_detail pld
    WHERE c.tax_entity_sid = e.tax_entity_sid
    AND e.prvdr_sid = f.prvdr_sid
    AND pld.prvdr_lctn_iid = f.prvdr_lctn_iid
    AND c.oprtnl_flag = 'A'
    AND c.status_cid = 2
    AND e.oprtnl_flag = 'A'
    AND e.status_cid = 2
    AND pld.oprtnl_flag = 'A'
    AND pld.status_cid = 2
    )X
    where c_from_date=max_c_from_date AND e_from_date =max_e_from_date AND
    pld_from_date=max_pld_from_date

  • How can we write this in analytical function..

    select a.employee_id,a.last_name,b.count from employees a, (select manager_id, count(manager_id) as count from employees group by manager_id) b where a.employee_id=b.manager_id;
    As per my requirement I need each manager's name and the number of employees reporting to him... The above query works. Could anybody help to write the same using an analytic function? How can this be written in a more efficient way (quicker performance)?
    Please also share a link to some documentation for getting a good understanding of analytic functions.
    Thanks in advance....

    are you trying to do a hierarchical type of query?
    select ename, count(ename) -1 numr_of_emps_under_this_mgr  from  emp
    connect by  empno =prior mgr
    group by ename
    order by count(ename) desc ;
    ENAME     NUMR_OF_EMPS_UNDER_THIS_MGR
    KING     13
    BLAKE     5
    JONES     4
    CLARK     1
    FORD     1
    SCOTT     1
    ADAMS     0
    TURNER     0
    MARTIN     0
    JAMES     0
    SMITH     0
    MILLER     0
    ALLEN     0
    WARD     0
    Here is the table structure I used (I think you can download it from Oracle somewhere)
    CREATE TABLE EMP
    (
      EMPNO     NUMBER(4)                           NOT NULL,
      ENAME     VARCHAR2(10 BYTE),
      JOB       VARCHAR2(9 BYTE),
      MGR       NUMBER(4),
      HIREDATE  DATE,
      SAL       NUMBER(7,2),
      COMM      NUMBER(7,2),
      DEPTNO    NUMBER(2)
    );
    SET DEFINE OFF;
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7369, 'SMITH', 'CLERK', 7902, TO_DATE('12/17/1980 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        800, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7499, 'ALLEN', 'SALESMAN', 7698, TO_DATE('02/20/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1600, 300, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7521, 'WARD', 'SALESMAN', 7698, TO_DATE('02/22/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1250, 500, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7566, 'JONES', 'MANAGER', 7839, TO_DATE('04/02/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        2975, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7654, 'MARTIN', 'SALESMAN', 7698, TO_DATE('09/28/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1250, 1400, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7698, 'BLAKE', 'MANAGER', 7839, TO_DATE('05/01/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        2850, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7782, 'CLARK', 'MANAGER', 7839, TO_DATE('06/09/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        2450, 10);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7788, 'SCOTT', 'ANALYST', 7566, TO_DATE('12/09/1982 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        3000, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, HIREDATE, SAL, DEPTNO)
    Values
       (7839, 'KING', 'PRESIDENT', TO_DATE('11/17/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        5000, 10);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, COMM, DEPTNO)
    Values
       (7844, 'TURNER', 'SALESMAN', 7698, TO_DATE('09/08/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1500, 0, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7876, 'ADAMS', 'CLERK', 7788, TO_DATE('01/12/1983 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1100, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7900, 'JAMES', 'CLERK', 7698, TO_DATE('12/03/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        950, 30);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7902, 'FORD', 'ANALYST', 7566, TO_DATE('12/03/1981 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        3000, 20);
    Insert into EMP
       (EMPNO, ENAME, JOB, MGR, HIREDATE, SAL, DEPTNO)
    Values
       (7934, 'MILLER', 'CLERK', 7782, TO_DATE('01/23/1982 00:00:00', 'MM/DD/YYYY HH24:MI:SS'),
        1300, 10);
    COMMIT;
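    For reference, a sketch of the analytic form the original poster asked about (my own take, untested): keep the self-join for the manager's name, but let COUNT(*) OVER do the per-manager counting instead of the separate GROUP BY subquery.
    SELECT DISTINCT m.employee_id,
           m.last_name,
           COUNT(*) OVER (PARTITION BY m.employee_id) AS reports
      FROM employees m
      JOIN employees e
        ON e.manager_id = m.employee_id;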

  • 2.1 EA Bug: group by auto complete generates group by for analytic function

    Hi,
    When using an analytic function in the SQL text, SQL Developer's auto-complete generates an automatic GROUP BY clause in the SQL text.
    Regards,
    Ingo

    Personally, I don't want anything changed automatically EVER. The day you don't notice and you run a wrong statement, the consequences may be very costly (read: disaster).
    Can this be turned off altogether? If there's a preference I didn't find, can this be left off by default?
    Thanks,
    K.

  • Really weird bug, About This Computer/Storage

    Hello,
    Today when I was looking through About This Computer I found this weird bug:
    What I mean is that my drives are hidden a bit under the other text to the right; I can't read what type my SSD is either.
    This is how it looks on my friend's computer:
    Is it really supposed to look like that for me? I can't resize the window either.

    It should say "of type SATA".
    Do you have any idea why the pictures of the hard drives aren't centered with the text as they are on my friend's?

  • Rectify this Analytical function

    Hi friends
    This query is similar to what I had asked a few days ago. I am getting data in the following manner:
    Month | MOB | sum(NewAcct) | sum(ActiveAcct)
    082009 | 0 | 1407 | 1358
    082009 | 1 | 0 | 1277
    082009 | 2 | 0 | 1000
    092009 | 0 | 1308 | 1269
    092009 | 1 | 0 | 1150
    I have a third column, 'OpenAcct'. This column should retain the MOB 0 value for every MOB within each month. The desired output would be:
    Month | MOB | NewAcct | ActiveAcct | OpenAcct
    082009 | 0 | 1407 | 1358 | 1407
    082009 | 1 | 0 | 1277 | 1407
    082009 | 2 | 0 | 1000 | 1407
    092009 | 0 | 1308 | 1269 |1308
    092009 | 1 | 0 | 1150 | 1308
    I have written the following query to achieve this result:
    SELECT
    x.OpenMonth,
    x.MOB,
    SUM(NewAccounts),
    FIRST_VALUE(sum(NEWACCounts)) OVER (PARTITION BY x.openmonth ORDER BY MOB) AS OpenAcct,
    SUM(ActiveAccounts)
    from
    (
    select
    openmonth,
    mob,
    newaccount,
    activeaccounts
    from table) x
    But the output is not what I expect!
    Please correct my query.

    You are correct in saying this, Frank.
    The GROUP BY clause is there.
    SELECT
    x.OpenMonth,
    x.MOB,
    SUM(NewAccounts),
    FIRST_VALUE(sum(NEWACCounts)) OVER (PARTITION BY x.openmonth ORDER BY MOB) AS OpenAcct,
    SUM(ActiveAccounts)
    from
    (
    select
    openmonth,
    mob,
    newaccount,
    activeaccounts
    from table) x
    group by
    x.OpenMonth,
    x.MOB
    Here's the target table structure:
    Month date;
    MOB number;
    TotalNewAccounts number;
    TotalOpenAcct Number;
    TotalActiveAccounts Number
    Here's the source table structure:
    openmonth date,
    mob number,
    newaccount number,
    activeaccounts number
    I hope this helps.
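    For what it's worth, here is a self-contained sketch of the intended OpenAcct logic, using the sample rows from the post as throwaway test data (not the real table):
    with src as (
      select '082009' openmonth, 0 mob, 1407 newaccount, 1358 activeaccounts from dual union all
      select '082009', 1, 0, 1277 from dual union all
      select '082009', 2, 0, 1000 from dual union all
      select '092009', 0, 1308, 1269 from dual union all
      select '092009', 1, 0, 1150 from dual
    )
    select openmonth,
           mob,
           sum(newaccount) as newacct,
           sum(activeaccounts) as activeacct,
           first_value(sum(newaccount))
             over (partition by openmonth order by mob) as openacct
      from src
     group by openmonth, mob
     order by openmonth, mob;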

  • The Bug about 'DB_SECONDARY_BAD' still exists in BerkeleyDB4.8!

    The Bug about 'DB_SECONDARY_BAD' still exists in BerkeleyDB4.8?
    I'm sorry for my poor English, but I just cannot find anywhere else to ask for help.
    Thanks for your patience!
    I'm using BDB4.8 C++ API on Ubuntu 10.04, Linux Kernel 2.6.32-24-generic
    $uname -a
    $Linux wonpc 2.6.32-24-generic #43-Ubuntu SMP Thu Sep 16 14:17:33 UTC 2010 i686 GNU/Linux
    When I update (overwrite) a record in the database, I may get a DB_SECONDARY_BAD exception.
    What's worse, this does not always occur; it's random. So I think it is probably a bug
    in BDB. I have seen many issues about DB_SECONDARY_BAD with BDB 4.5, 4.6...
    To reproduce the issue, I made a simplified test program from my real program.
    The data stored in the database is a class called 'EntryData'. It's defined in db_access.h,
    which also defines some high-level API functions that hide the BDB calls, such as
    store_entry_data(), which takes EntryData as its argument. EntryData has a string
    member 'name' and a vector<string> member 'labels', so store_entry_data() will
    put the real data of an EntryData into a contiguous memory block. get_entry_data() returns
    an EntryData built up from the contiguous memory block fetched from the database.
    The complete test program follows this line:
    /////////db_access.h////////////
    #ifndef __DB_ACCESS_H__
    #define __DB_ACCESS_H__
    #include <string>
    #include <vector>
    #include <db_cxx.h>
    class EntryData;
    //extern Path DataDir; // default value, can be changed
    extern int database_setup();
    extern int database_close();
    extern int store_entry_data(const EntryData&, u_int32_t = DB_NOOVERWRITE);
    extern int get_entry_data(const std::string&, EntryData*, u_int32_t = 0);
    extern int rm_entry_data(const std::string&);
    class DBSetup
    // Calls database_setup() on construction and database_close() automatically when going out of scope.
    // This class has no data members.
    public:
    DBSetup() {
    database_setup();
    ~DBSetup() {
    database_close();
    class EntryData
    public:
    typedef std::vector<std::string> LabelContainerType;
    EntryData() {}
    EntryData(const std::string& s) : name(s) {}
    EntryData(const std::string& s, LabelContainerType& v)
    : name(s), labels(v) {}
    EntryData(const std::string&, const char*[]);
    class DataBlock;
    // Build directly from a memory block; the mem pointer will be obtained from the database.
    // It is the content of buf_ptr->buf in the DataBlock that an EntryData was converted into.
    EntryData(const void* mem_blk, const int len);
    ~EntryData() {};
    const std::string& get_name () const { return name; }
    const LabelContainerType& get_labels() const { return labels; }
    void set_name (const std::string& s) { name = s; }
    void add_label(const std::string&);
    void rem_label(const std::string&);
    void show() const;
    // get contiguous memory for all:
    DataBlock get_block() const { return DataBlock(*this); }
    class DataBlock
    // contiguous memory for all.
    public:
    DataBlock(const EntryData& data);
    // Adopt a block of memory as the content of buf_ptr->buf,
    // e.g. a result fetched from the database.
    DataBlock(void* mem, int len);
    // Copy constructor:
    DataBlock(const DataBlock& orig) :
    data_size(orig.data_size),
    capacity(orig.capacity),
    buf_ptr(orig.buf_ptr) { ++buf_ptr->use; }
    // Assignment operator:
    DataBlock& operator=(const DataBlock& oth)
    data_size = oth.data_size;
    capacity = oth.capacity;
    if(--buf_ptr->use == 0)
    delete buf_ptr;
    buf_ptr = oth.buf_ptr;
    return *this;
    ~DataBlock() {
    if(--buf_ptr->use == 0) { delete buf_ptr; }
    // data() is forced to hand out the raw buffer because the Dbt constructor does not take const char*;
    // the pointer returned by data() is not meant to be modified.
    const char* data() const { return buf_ptr->buf; }
    int size() const { return data_size; }
    private:
    void pack_str(const std::string& s);
    static const int init_capacity = 100;
    int data_size; // length of the data block.
    int capacity; // amount of memory already allocated for buf.
    class SmartPtr; // forward declaration.
    SmartPtr* buf_ptr;
    class SmartPtr
    friend class DataBlock;
    char* buf;
    int use;
    SmartPtr(char* p) : buf(p), use(1) {}
    ~SmartPtr() { delete [] buf; }
    private:
    std::string name; // entry name
    LabelContainerType labels; // entry labels list
    }; // class EntryData
    #endif
    //////db_access.cc/////////////
    #include <iostream>
    #include <cstring>
    #include <cstdlib>
    #include <vector>
    #include <algorithm>
    #include "directory.h"
    #include "db_access.h"
    using namespace std;
    static Path DataDir("~/mydict_data"); // default value, can be changed
    const Path& get_datadir() { return DataDir; }
    static DbEnv myEnv(0);
    static Db db_bynam(&myEnv, 0); // using name as key
    static Db db_bylab(&myEnv, 0); // using label as key
    static int generate_keys_for_db_bylab
    (Db* sdbp, const Dbt* pkey, const Dbt* pdata, Dbt* skey)
    EntryData entry_data(pdata->get_data(), pdata->get_size());
    int lab_num = entry_data.get_labels().size();
    Dbt* tmpdbt = (Dbt*) malloc( sizeof(Dbt) * lab_num );
    memset(tmpdbt, 0, sizeof(Dbt) * lab_num);
    EntryData::LabelContainerType::const_iterator
    lab_it = entry_data.get_labels().begin(), lab_end = entry_data.get_labels().end();
    for(int i = 0; lab_it != lab_end; ++lab_it, ++i) {
    tmpdbt[ i ].set_data( (void*)lab_it->c_str() );
    tmpdbt[ i ].set_size( lab_it->size() );
    skey->set_flags(DB_DBT_MULTIPLE | DB_DBT_APPMALLOC);
    skey->set_data(tmpdbt);
    skey->set_size(lab_num);
    return 0;
    //@Return Value: return non-zero at error
    extern int database_setup()
    const string DBEnvHome (DataDir + "DBEnv");
    const string dbfile_bynam("dbfile_bynam");
    const string dbfile_bylab("dbfile_bylab");
    db_bylab.set_flags(DB_DUPSORT);
    const u_int32_t env_flags = DB_CREATE | DB_INIT_MPOOL;
    const u_int32_t db_flags = DB_CREATE;
    rmkdir(DBEnvHome);
    try
    myEnv.open(DBEnvHome.c_str(), env_flags, 0);
    db_bynam.open(NULL, dbfile_bynam.c_str(), NULL, DB_BTREE, db_flags, 0);
    db_bylab.open(NULL, dbfile_bylab.c_str(), NULL, DB_BTREE, db_flags, 0);
    db_bynam.associate(NULL, &db_bylab, generate_keys_for_db_bylab, 0);
    } catch(DbException &e) {
    cerr << "Err when open DBEnv or Db: " << e.what() << endl;
    return -1;
    } catch(std::exception& e) {
    cerr << "Err when open DBEnv or Db: " << e.what() << endl;
    return -1;
    return 0;
    int database_close()
    try {
    db_bylab.close(0);
    db_bynam.close(0);
    myEnv.close(0);
    } catch(DbException &e) {
    cerr << e.what();
    return -1;
    } catch(std::exception &e) {
    cerr << e.what();
    return -1;
    return 0;
    // Returns the return value of Db::put().
    int store_entry_data(const EntryData& e, u_int32_t flags)
    int res = 0;
    try {
    EntryData::DataBlock blk(e);
    // The first string stored in the buffer returned by data() is e.get_name().
    Dbt key ( (void*)blk.data(), strlen(blk.data()) + 1 );
    Dbt data( (void*)blk.data(), blk.size() );
    res = db_bynam.put(NULL, &key, &data, flags);
    } catch (DbException& e) {
    cerr << e.what() << endl;
    throw; // re-throw.
    return res;
    // Returns the return value of Db::get(); the value of EntryData* e is only meaningful when the call succeeds.
    int get_entry_data
    (const std::string& entry_name, EntryData* e, u_int32_t flags)
    Dbt key( (void*)entry_name.c_str(), entry_name.size() + 1 );
    Dbt data;
    data.set_flags(DB_DBT_MALLOC);
    int res = db_bynam.get(NULL, &key, &data, flags);
    if(res == 0)
    new (e) EntryData( data.get_data(), data.get_size() );
    free( data.get_data() );
    return res;
    int rm_entry_data(const std::string& name)
    Dbt key( (void*)name.c_str(), name.size() + 1 );
    cout << "to remove: \'" << name << "\'" << endl;
    int res = db_bynam.del(NULL, &key, 0);
    return res;
    EntryData::EntryData(const std::string& s, const char* labels_arr[]) : name(s)
    {   // labels_arr must be NULL-terminated.
    for(const char** i = labels_arr; *i != NULL; i++)
    labels.push_back(*i);
    EntryData::EntryData(const void* mem_blk, const int len)
    const char* buf = (const char*)mem_blk;
    int consumed = 0; // number of bytes of mem_blk consumed so far.
    name = buf; // the first string is the name
    consumed += name.size() + 1;
    for (string label = buf + consumed;
    consumed < len;
    consumed += label.size() + 1)
    label = buf + consumed;
    labels.push_back(label);
    void EntryData::add_label(const string& new_label)
    if(find(labels.begin(), labels.end(), new_label)
    == labels.end())
    labels.push_back(new_label);
    void EntryData::rem_label(const string& to_rem)
    LabelContainerType::iterator iter = find(labels.begin(), labels.end(), to_rem);
    if(iter != labels.end())
    labels.erase(iter);
    void EntryData::show() const {
    cout << "name: " << name << "; labels: ";
    LabelContainerType::const_iterator it, end = labels.end();
    for(it = labels.begin(); it != end; ++it)
    cout << *it << " ";
    cout << endl;
    EntryData::DataBlock::DataBlock(const EntryData& data) :
    data_size(0),
    capacity(init_capacity),
    buf_ptr(new SmartPtr(new char[init_capacity]))
    pack_str(data.name);
    for(EntryData::LabelContainerType::const_iterator \
    i = data.labels.begin();
    i != data.labels.end();
    ++i) { pack_str(*i); }
    void EntryData::DataBlock::pack_str(const std::string& s)
    int string_size = s.size() + 1; // to put sting in buf separately.
    if(capacity >= data_size + string_size) {
    memcpy(buf_ptr->buf + data_size, s.c_str(), string_size);
    else {
    capacity = (data_size + string_size)*2; // allocate a generously larger buffer.
    buf_ptr->buf = (char*)realloc(buf_ptr->buf, capacity);
    memcpy(buf_ptr->buf + data_size, s.c_str(), string_size);
    data_size += string_size;
    //////////// test_put.cc ///////////
    #include <iostream>
    #include <string>
    #include <cstdlib>
    #include "db_access.h"
    using namespace std;
    int main(int argc, char** argv)
    if(argc < 2) { exit(EXIT_FAILURE); }
    DBSetup setupup_mydb;
    int res = 0;
    EntryData ed(argv[1], (const char**)argv + 2);
    res = store_entry_data(ed);
    if(res != 0) {
         cerr << db_strerror(res) << endl;
         return res;
    return 0;
    // To Compile:
    // $ g++ -ldb_cxx -lboost_regex -o test_put test_put.cc db_access.cc directory.cc
    //////////// test_update.cc ///////////
    #include <iostream>
    #include <cstdlib>
    #include <string>
    #include <boost/program_options.hpp>
    #include "db_access.h"
    using namespace std;
    namespace po = boost::program_options;
    int main(int argc, char** argv)
    if(argc < 2) { exit(EXIT_SUCCESS); }
    DBSetup setupup_mydb;
    int res = 0;
    po::options_description cmdopts("Allowed options");
    po::positional_options_description pos_opts;
    cmdopts.add_options()
    ("entry", "Specify the entry that will be edited")
    ("addlabel,a", po::value< vector<string> >(),
    "add a label for specified entry")
    ("removelabel,r", po::value< vector<string> >(),
    "remove the label of specified entry")
    pos_opts.add("entry", 1);
    po::variables_map vm;
    store( po::command_line_parser(argc, argv).
    options(cmdopts).positional(pos_opts).run(), vm );
    notify(vm);
    EntryData entry_data;
    if(vm.count("entry")) {
    const string& entry_to_edit = vm["entry"].as<string>();
    res = get_entry_data( entry_to_edit, &entry_data );
    switch (res)
    case 0:
    break;
    case DB_NOTFOUND:
    cerr << "No entry named: \'"
    << entry_to_edit << "\'\n";
    return res;
    break;
    default:
    cerr << db_strerror(res) << endl;
    return res;
    } else {
    cerr << "No entry specified\n";
    exit(EXIT_FAILURE);
    EntryData new_entry_data(entry_data);
    typedef vector<string>::const_iterator VS_CI;
    if(vm.count("addlabel")) {
    const vector<string>& to_adds = vm["addlabel"].as< vector<string> >();
    VS_CI end = to_adds.end();
    for(VS_CI i = to_adds.begin(); i != end; ++i) {
    new_entry_data.add_label(*i);
    if(vm.count("removelabel")) {
    const vector<string>& to_rems = vm["removelabel"].as< vector<string> >();
    VS_CI end = to_rems.end();
    for(VS_CI i = to_rems.begin(); i != end; ++i) {
    new_entry_data.rem_label(*i);
    cout << "Old data| ";
    entry_data.show();
    cout << "New data| ";
    new_entry_data.show();
    res = store_entry_data(new_entry_data, 0); // set flags to zero permitting Over Write
    if(res != 0) {
    cerr << db_strerror(res) << endl;
    return res;
    return 0;
    // To Compile:
    // $ g++ -ldb_cxx -lboost_regex -lboost_program_options -o test_update test_update.cc db_access.cc directory.cc

    ////////directory.h//////
    #ifndef __DIRECTORY_H__
    #define __DIRECTORY_H__
    #include <string>
    #include <string>
    #include <sys/types.h>
    using std::string;
    class Path
    public:
    Path() {}
    Path(const std::string&);
    Path(const char* raw) { new (this) Path(string(raw)); }
    Path upper() const;
    void operator+= (const std::string&);
    // convert to string (char*):
    //operator std::string() const {return spath;}
    operator const char*() const {return spath.c_str();}
    const std::string& str() const {return spath;}
    private:
    std::string spath; // the real path
    inline Path operator+(const Path& L, const string& R)
    Path p(L);
    p += R;
    return p;
    int rmkdir(const string& path, const mode_t mode = 0744, const int depth = -1);
    #endif
    ///////directory.cc///////
    #ifndef __DIRECTORY_H__
    #define __DIRECTORY_H__
    #include <string>
    #include <string>
    #include <sys/types.h>
    using std::string;
    class Path
    public:
    Path() {}
    Path(const std::string&);
    Path(const char* raw) { new (this) Path(string(raw)); }
    Path upper() const;
    void operator+= (const std::string&);
    // convert to string (char*):
    //operator std::string() const {return spath;}
    operator const char*() const {return spath.c_str();}
    const std::string& str() const {return spath;}
    private:
    std::string spath; // the real path
    inline Path operator+(const Path& L, const string& R)
    Path p(L);
    p += R;
    return p;
    int rmkdir(const string& path, const mode_t mode = 0744, const int depth = -1);
    #endif
    //////////////////// All the code is above ////////////////////////////////
    Use the command below
    $ g++ -ldb_cxx -lboost_regex -o test_put test_put.cc db_access.cc directory.cc
    to get a test program that can insert a record into the database.
    To insert a record, use the command below:
    $ ./test_put ubuntu linux os
    It will store an EntryData named 'ubuntu' with two labels ('linux', 'os') in the database.
    Use the command below
    $ g++ -ldb_cxx -lboost_regex -lboost_program_options -o test_update test_update.cc db_access.cc directory.cc
    to get a test program that can update existing records.
    To update a record, use the command below:
    $ ./test_update ubuntu -r linux -a canonical
    It will update the record with the key 'ubuntu', removing the label 'linux' and adding a new
    label, 'canonical'.
    Great thanks to you if you've read and understood my code!
    I've said that the DB_SECONDARY_BAD exception is random. The same operation may cause
    an exception one time and go well another time.
    As I've tested below:
    ## Lines not starting with '$' are stdout or stderr.
    $ ./test_put linux os linus
    $ ./test_update linux -r os
    Old data| name: linux; labels: os linus
    New data| name: linux; labels: linus
    $ ./test_update linux -r linus
    Old data| name: linux; labels: linus
    New data| name: linux; labels:
    dbfile_bynam: DB_SECONDARY_BAD: Secondary index inconsistent with primary
    Db::put: DB_SECONDARY_BAD: Secondary index inconsistent with primary
    terminate called after throwing an instance of 'DbException'
    what(): Db::put: DB_SECONDARY_BAD: Secondary index inconsistent with primary
    Aborted
    Look! I've received a DB_SECONDARY_BAD exception. But this exception does not always
    happen, even for the same operation.
    Because the exception is random, you may not have the "luck" to hit it during your test.
    So let's insert a record by:
    $ ./test_put t
    and then give it a great number of labels:
    $ for((i = 0; i != 100; ++i)); do ./test_update t -a "label_$i"; done
    and then:
    $ for((i = 0; i != 100; ++i)); do ./test_update t -r "label_$i"; done
    Thus, the DB_SECONDARY_BAD exception is almost certain to happen.
    I've been confused by this problem for quite a while. I would appreciate it if someone could solve
    my problem.
    Many thanks!
    Wonder
