Increment in query

Hi,
I have one column in which data is stored as follows:
colParent
30
30
30
165
165
165
165
44
45
45
The problem is that I want to add one more column at run time, called colCounter, which should look like this:
colParent | colCounter
30 1
30 2
30 3
165 1
165 2
165 3
165 4
44 1
45 1
45 2
Any help from all you genius people?

SQL> select * from temp;
C1
30
30
30
165
165
165
165
44
45
45
10 rows selected.
SQL> select c1,row_number() over(partition by c1 order by c1) s from temp;
C1                            S
165                           1
165                           2
165                           3
165                           4
30                            1
30                            2
30                            3
44                            1
45                            1
45                            2
10 rows selected.
SQL>
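A note on ordering: the table has no column that defines the original row order, so the counter's sequence within each c1 group is arbitrary. A minimal sketch, assuming a hypothetical ordering column c2 (an ID or insert timestamp) is available:
-- c2 is a hypothetical ordering column; the temp table as posted has no such column
select c1, row_number() over(partition by c1 order by c2) colcounter from temp;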

Similar Messages

  • Auto Increment column query

    I have a very simple table used for debugging:
    CREATE TABLE APPS.XX_DEBUG_TMP (
      TEMP_VALUE  VARCHAR2(255 BYTE),
      TEMP_DATE   DATE
    );
    Then I can use it to store values as my PL/SQL is processed, e.g.:
    INSERT INTO XX_DEBUG_TMP (TEMP_VALUE, TEMP_DATE) VALUES ('line 740 l_username value check:' || l_username, SYSDATE);
    COMMIT;
    The trouble is that if a load of debug statements gets processed with the same timestamp, I can't see which came first.
    Can I modify my table creation SQL to include an ID column which just increments for each row that is added to the table?
    I'm familiar with how to do it in MySQL (sorry - I know this is an Oracle forum - but am just putting this here to show what I mean):
    CREATE TABLE `XX_DEBUG_TMP` (
      `TEMP_ID` MEDIUMINT(10) NOT NULL AUTO_INCREMENT PRIMARY KEY,
      `TEMP_VALUE` VARCHAR(255) NOT NULL,
      `TEMP_DATE` DATETIME NOT NULL
    ) ENGINE = MYISAM;
    Is it that simple with Oracle? Probably not!
    Any advice much appreciated.
    Thanks

    There is no auto-increment column in Oracle. However, you can create a sequence.
    CREATE TABLE APPS.XX_DEBUG_TMP (
      TEMP_ID     NUMBER NOT NULL PRIMARY KEY,
      TEMP_VALUE  VARCHAR2(255 BYTE),
      TEMP_DATE   DATE
    );
    CREATE SEQUENCE APPS.XX_DEBUG_TMP_SEQ;
    Then in your insert statement do this:
    INSERT INTO XX_DEBUG_TMP (TEMP_ID, TEMP_VALUE, TEMP_DATE) VALUES (APPS.XX_DEBUG_TMP_SEQ.NEXTVAL, 'line 740 l_username value check:' || l_username, SYSDATE);
    Another possible solution to your problem would be to use a TIMESTAMP data type instead of a DATE data type. It has fractional-second resolution (up to 9 digits, I believe).
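    If you don't want to reference the sequence in every INSERT, a trigger can populate the ID for you. A minimal sketch, assuming the table and sequence from the reply above (the trigger name is made up; from 11g the sequence can be referenced directly in PL/SQL, and on 12c and later an identity column removes the need for a trigger altogether):
    -- Hypothetical trigger: fills TEMP_ID from the sequence whenever it is not supplied.
    CREATE OR REPLACE TRIGGER APPS.XX_DEBUG_TMP_BI
    BEFORE INSERT ON APPS.XX_DEBUG_TMP
    FOR EACH ROW
    BEGIN
      IF :NEW.TEMP_ID IS NULL THEN
        :NEW.TEMP_ID := APPS.XX_DEBUG_TMP_SEQ.NEXTVAL;
      END IF;
    END;
    /
    With this in place, the original two-column INSERT statements keep working unchanged.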

  • Data types related to incrementing a query column value

    ColdFusion: 9.0.1
    Server OS: Windows 7
    Web Server: Apache 2.2.21
    Database: SQL Server Express
    I have been returning numeric values from the database for a while.  Some I have left as straight values and some I have used in other calculations.  Not sure if this is related, but after upgrading to ColdFusion 9.0.1 I experienced an unexpected data conversion (at least that seems to be what is happening).  I am returning a value from a database (column datatype is integer) and incrementing it by 1.
    Code:
    <cfquery name="qry" datasource="DATASOURCE">
         SELECT MAX(Number) AS Number
         FROM MyTable
    </cfquery>
    <cfset myNumber = qry.Number + 1 />
    I am expecting this to return a whole number (integer).  However, it seems that I am getting a double now.  This I discovered by dumping out the suspect variable's class:
    Code:
    <cfdump var="#myNumber.getClass()#" />
    I was able to fix the issue by converting the variable to an integer using javaCast(), but I am wondering if this is a change/issue with ColdFusion 9.0.1 or something else, because this has worked just fine before.

    I doubt the behavior is new. CF usually converts to "double" implicitly whenever you apply mathematical operations (+,-, *, ...). So I am not sure what effect that has on your code. Can you elaborate?
       SELECT MAX(Number)
    As an aside, I am not sure what the code is doing exactly but SELECT MAX(...) is not inherently thread safe. Is there a reason you need to do this manually as opposed to using an auto-incrementing identity column?
    -Leigh
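    For reference, a minimal sketch of the identity-column approach Leigh suggests (table and column names here are hypothetical): SQL Server assigns the number atomically at insert time, so concurrent requests cannot both read the same MAX() and collide.
    -- Hypothetical demo table with an auto-incrementing key.
    CREATE TABLE MyTable (
        Number    INT IDENTITY(1,1) PRIMARY KEY,
        SomeValue VARCHAR(255) NOT NULL
    );

    INSERT INTO MyTable (SomeValue) VALUES ('example row');
    -- SCOPE_IDENTITY() returns the value generated by the insert in this scope.
    SELECT SCOPE_IDENTITY() AS NewNumber;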

  • Insert one table and update another???

    Hello,
    I was wondering if it is possible to insert a record into one table and update another record in a different table.
    I have a form on my company's intranet that allows employees to add comments (ADDT Insert transaction) about new products we are going to bring to the market. At the same time, I would like to count the number of comments on each product and update that number per product, to see which product is getting the most reviews.
    Right now the products are on the homepage with the title and inserted date. From there, the employees click on a product, get the info, and make comments about it. The problem for me is that I would like to see the comment count for each product on the home page, which means I would have to update the product table with the count.
    Sorry, I am using PHP as the technology.
    When I used to do it in ASP, I would insert the comment using the POST from the form, but add another hidden field with the count in it and I would use the "Command" Server behavior to retrieve the number and update the other table field.
    I noticed that dreamweaver removed the "Command" server behavior when using PHP.
    All help is greatly appreciated
    Charles.

    Hi Charles,
    you can generally execute queries on a different table using Custom Triggers, and with regard to your "update the product table with the count" scenario the ADDT help file has a helpful pointer in the chapter "Custom transactions and triggers : Save additional information on login" -- there you'll find a sample "incremental counter" query which should be easy to adapt.
    Cheers,
    Günter Schenk
    Adobe Community Expert, Dreamweaver
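    The "incremental counter" idea boils down to one UPDATE run as a custom trigger after the comment insert succeeds. A minimal sketch (the products table, comment_count column, and the literal id are hypothetical; in ADDT/PHP the id would come from the submitted form value):
    -- products / comment_count / product_id are placeholder names; 123 stands for the submitted product id.
    UPDATE products
    SET    comment_count = comment_count + 1
    WHERE  product_id = 123;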

  • Offline data with mobile service - How to limit data download

    Hi to all,
    I have to manage offline data with mobile service.
    I have a table with a lot of data, but I would like to download the data range by range,
    let's say 100 records at a time.
    Is it possible to do this with the PullAsync method?
    I noticed that I'm not able to use the Skip function in the query parameter of the PullAsync method.
    In fact, if I try to use it in the query
    await mytable.PullAsync("mykey",mytable.Where(r => r.userid == iUserId).Skip(iSkip).Take(iTake));
    I receive this error:
    Incremental pull query must not have skip or top specified
    Any help is appreciated.
    Thanks
    Daniele

    Right now, the implementation of PullAsync() in the .NET SDK will always try to continue paging until the end. So this will download the initial range, but then continue on with everything after it as well (updating skip until no more records are found).
    So you would probably be better off letting the incremental sync logic handle the paging for you.
    To come up with an appropriate workaround: what issue are you running into when letting incremental sync fetch all the pages sequentially in the background?

  • Oracle 11g/R2 Query Result Cache - Incremental Update

    Hi,
    In Oracle 11gR2, I created a replica of the HR.EMPLOYEES table and executed the following statement (although using the SUM() function is not logical in this case, it is just to test the result).
    STEP - 1
    SELECT /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
    FROM           HR.Employees_copy
    WHERE      department_id = 20
    GROUP BY      employee_id, first_name, last_name;
    EMPLOYEE_ID      FIRST_NAME LAST_NAME     SUM(SALARY)
    202           Pat           Fay          6000
    201           Michael           Hartstein     13000
    Elapsed: 00:00:00.01
    Execution Plan
    Plan hash value: 3837552314
    | Id | Operation           | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT      | | 2 | 130 | 4 (25)| 00:00:01 |
    | 1 | RESULT CACHE      | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
    | 2 | HASH GROUP BY      | | 2 | 130 | 4 (25)| 00:00:01 |
    |* 3 | TABLE ACCESS FULL     | EMPLOYEES_COPY | 2 | 130 | 3 (0)| 00:00:01 |
    --------------------------------------------------------------------------------------------------
    Statistics
    0 recursive calls
    0 db block gets
    0 consistent gets
    0 physical reads
    0 redo size
    690 bytes sent via SQL*Net to client
    416 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    2 rows processed
    STEP - 2
    INSERT INTO HR.employees_copy
    VALUES(200, 'Dummy', 'User','[email protected]',NULL, sysdate, 'MANAGER',5000, NULL,NULL,20);
    STEP - 3
    SELECT      /*+ RESULT_CACHE */ employee_id, first_name, last_name, SUM(salary)
    FROM           HR.Employees_copy
    WHERE      department_id = 20
    GROUP BY      employee_id, first_name, last_name;
    EMPLOYEE_ID      FIRST_NAME LAST_NAME SUM(SALARY)
    202      Pat      Fay      6000
    201      Michael      Hartstein      13000
    200      Dummy      User      5000
    Elapsed: 00:00:00.03
    Execution Plan
    Plan hash value: 3837552314
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
    | 0 | SELECT STATEMENT |          | 3 | 195 | 4 (25)| 00:00:01 |
    | 1 | RESULT CACHE | 3acbj133x8qkq8f8m7zm0br3mu | | | | |
    | 2 | HASH GROUP BY | | 3 | 195 | 4 (25)| 00:00:01 |
    |* 3 | TABLE ACCESS FULL| EMPLOYEES_COPY | 3 | 195 | 3 (0)| 00:00:01 |
         Statistics
    0 recursive calls
    0 db block gets
    4 consistent gets
    0 physical reads
    0 redo size
    714 bytes sent via SQL*Net to client
    416 bytes received via SQL*Net from client
    2 SQL*Net roundtrips to/from client
    0 sorts (memory)
    0 sorts (disk)
    3 rows processed
    In the execution plan of STEP-3, the RESULT CACHE operation is shown against ID 1, which suggests the result was retrieved directly from the result cache. Does this mean that the Oracle server incrementally retrieved the result set?
    Before the execution of STEP-2 the cache contained only 2 records; then 1 record was inserted, and after STEP-3 a total of 3 records was returned from the cache. Does this mean that the newly inserted row was retrieved from the database and merged into the cached result of STEP-1?
    If the Oracle server has incrementally retrieved and merged the newly inserted record, what mechanism does it use to do so?
    Regards,
    Wasif
    Edited by: 965300 on Oct 15, 2012 12:25 AM

    No, the RESULT CACHE operation doesn't necessarily mean that the results are retrieved from there; they could be being written to there.
    Look at the number of consistent gets: it's zero in the first step (I assume you had already run this query before), so I would conclude that the data is being read from the result cache.
    In the third step there are 4 consistent gets, so I would conclude that the data is being written to the result cache; a fourth step repeating the SQL should show zero consistent gets, and that would be the results being read.
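    One way to check this directly (a sketch; it needs access to the V$ views, and the cache id is the one shown in your plans): look the result up in V$RESULT_CACHE_OBJECTS and watch its STATUS change as you repeat the steps.
    -- 'Published' means the cached result is usable; after the STEP-2 DML the earlier
    -- result should show as invalidated, forcing STEP-3 to build and cache a new one.
    SELECT id, type, status, name, creation_timestamp
    FROM   v$result_cache_objects
    WHERE  cache_id = '3acbj133x8qkq8f8m7zm0br3mu';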

  • Auto increment field with reset in a query

    Hi,
    I have the following query. I want 'field number' to count as an auto-increment number and reset for each group, e.g.:
    for group 1, if there are 10 rows, it should give me rownum 1-10;
    for group 2, if there are 5 rows, it should give me rownum 1-5.
    select d.drop_prog_id,
    a.opre_cms_oppty_id,
    c.more_tipo_revisione,
    decode(f.grdo_prog_id,16,null,
    decode(f.grdo_prog_id,75,'Other','Euro')
    ) Field_Number,
    decode(f.grdo_prog_id,16,e.doma_testo, f.grdo_descrizione) GROUP1,
    decode(f.grdo_prog_id,16,h.risp_valore,d.drop_valore_risposta) MILESTONE_RESPONSE,
    d.drop_valore_commento1,
    d.drop_valore_commento2
    from wrt.tr002_opportunita_revisioni a,
    wrt.tr005_opportunita_rev_moduli b,
    wrt.tr008_moduli_revisione c,
    wrt.tr006_domande_risposte_oppor d,
    wrt.tr009_moduli_domande g,
    wrt.tr010_domande e,
    wrt.tr012_gruppo_domande f,
    wrt.tr015_risposte h
    where b.ormo_opre_prog_id = a.opre_prog_id and c.more_prog_id in (1, 19) and
    d.drop_ormo_prog_id = b.ormo_prog_id
    and d.drop_modo_prog_id = g.modo_prog_id
    and g.modo_more_prog_id = c.more_prog_id and
    g.modo_doma_prog_id = e.doma_prog_id
    and e.doma_grdo_prog_id = f.grdo_prog_id
    and (f.grdo_prog_id in (76, 75)
    or (e.doma_testo like '%Are payment terms from invoice%' and
    f.grdo_prog_id = 16))
    and a.opre_ultima_revisione = 'S'
    and h.risp_prog_id = d.drop_risp_prog_id
    group by d.drop_prog_id,
    a.opre_cms_oppty_id,
    c.more_tipo_revisione,
    drop_prog_id,
    e.doma_testo,
    d.drop_valore_risposta,
    h.risp_valore,
    d.drop_valore_commento1,
    d.drop_valore_commento2,
    f.grdo_descrizione,
    f.grdo_prog_id
    order by a.opre_cms_oppty_id,
    min(f.GRDO_ORDINE) ,
    min(g.MODO_ORDINE)

    row_number() over (partition by <group> order by <whatever>)
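    A minimal, self-contained sketch of that reply (sample data only; in the posted query the PARTITION BY column would be the GROUP1 expression and the ORDER BY whatever defines row order within each group):
    -- sample data only, to show the counter restarting per group
    WITH sample_data AS (
      SELECT 1 AS grp, 'a' AS val FROM dual UNION ALL
      SELECT 1, 'b' FROM dual UNION ALL
      SELECT 2, 'c' FROM dual
    )
    SELECT grp,
           val,
           -- restarts at 1 for every distinct grp value
           ROW_NUMBER() OVER (PARTITION BY grp ORDER BY val) AS field_number
    FROM   sample_data;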

  • MV Incremental Refresh on join query of remote database tables

    Hi,
    I am trying to create a MV with incremental refresh option on a join query with 2 tables of remote database.
    Created MV logs on 2 tables in the remote database.
    DROP MATERIALIZED VIEW LOG ON emp;
    CREATE MATERIALIZED VIEW LOG ON emp WITH ROWID;
    DROP MATERIALIZED VIEW LOG ON dept;
    CREATE MATERIALIZED VIEW LOG ON dept WITH ROWID;
    Now, trying to create the MV,
    CREATE MATERIALIZED VIEW mv_emp_dept
    BUILD IMMEDIATE
    REFRESH FAST
    START WITH SYSDATE
    NEXT SYSDATE + 1/(24*15)
    WITH PRIMARY KEY
    AS
    SELECT e.ename, e.job, d.dname FROM emp@remote_db e,dept@remote_db d
    WHERE e.deptno=d.deptno
    AND e.sal>800;
    Getting ORA-12052 error.
    Can you please help me.
    Thanks,
    Anjan

    Primary Key is on EMPNO for EMP table and DEPTNO for DEPT table.
    Actually, I have been asked to do a feasibility test to see whether incremental (fast) refresh can be performed on an MV with a join query over 2 remote database tables.
    I've tried all combinations of ROWID and PRIMARY KEY, but I keep getting different errors. From various links I found that it should be possible, but I cannot create a successful test case.
    It will be very much helpful if you can correct my example or tell me the restrictions in this case.
    Thanks,
    Anjan
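    A sketch of what usually satisfies the joins-only fast-refresh restrictions (not verified against your remote setup; it assumes both masters live at the same remote site and the WITH ROWID MV logs above exist): the SELECT list must carry the ROWID of every base table, and the MV is registered WITH ROWID rather than WITH PRIMARY KEY.
    CREATE MATERIALIZED VIEW mv_emp_dept
    BUILD IMMEDIATE
    REFRESH FAST
    START WITH SYSDATE
    NEXT SYSDATE + 1/(24*15)
    WITH ROWID
    AS
    SELECT e.rowid AS emp_rid,   -- rowids of all joined tables must appear in the select list
           d.rowid AS dept_rid,
           e.ename, e.job, d.dname
    FROM   emp@remote_db  e,
           dept@remote_db d
    WHERE  e.deptno = d.deptno
    AND    e.sal > 800;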

  • Query incremental size from previous incremental 1

    Hi All,
    Is it possible to query a view in the DB to find the incremental size in a DB?
    Trying to figure out how many blocks/data is changed between now and my previous incremental 0 or 1 backup (how big the next backup will be).
    Thanks,

    Pre RMAN and incrementals, you used to go by the amount of redo / change vectors, although that obviously covers both committed and uncommitted data for your point-in-time recovery.
    If you've already got a level 0 and a level 1, you should know the delta / backup piece size by checking RC_BACKUP_SET. If you reconcile this against the number of redo changes between the level 0 and the level 1, it might give you a vague steer on backup piece sizings (although, to be honest, there's no substitute for just running it and finding out!), so that you could estimate the next delta.
    Something like RC_BACKUP_PIECE and V$LOG_HISTORY where FIRST_TIME is between the level 0 and the level 1. It would need testing, though!
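    A sketch of that idea against the controlfile views (run on the target database; if you use a recovery catalog, the RC_* views carry the same information): compare the sizes of the backup pieces produced so far by incremental level.
    -- Sizes reported in MB; join V$BACKUP_SET to V$BACKUP_PIECE on set_stamp/set_count.
    SELECT bs.incremental_level,
           bp.completion_time,
           ROUND(bp.bytes / 1024 / 1024) AS size_mb
    FROM   v$backup_set   bs
    JOIN   v$backup_piece bp ON bp.set_stamp = bs.set_stamp
                            AND bp.set_count = bs.set_count
    ORDER  BY bp.completion_time;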

  • Pre/Post increment query

    I have a query regarding Pre/Post increment.
    The following code:
    int x = 0;
    int y = 0;
    x += y++;
    System.out.println("x = " + x);
    System.out.println("y = " + y);
    Produces the following output:
    x = 0
    y = 1
    (Value of y is assigned to x, y is then post incremented).
    This makes sense to me. However, in the following code
    the value of x does not get incremented. Can someone
    explain this to me?
    The following code:
    int x = 0;
    x += x++;
    System.out.println("x = " + x);
    Produces the following output:
    x = 0
    (Value of x is assigned to x, then x does not get post incremented??).

    I code with the mind set of "If the code is not self explanatory or obvious, then
    it's too confusing and should be avoided".
    I am studying for the Java Programmer Certification exam, so this query was more
    to try and understand why the output did not match what I had expected.
    So
    x += x++;
    is the same as (where x' refers to the original value of x)
    x = (x) + (x++);
    Both operands are evaluated first: the left-hand x is read as x' (0), then x++ yields x' (0) and increments x to 1. The assignment happens last, so:
    x = x' + x'; // x = 0 + 0 -- the increment performed by x++ is overwritten
    Thanks for all the responses!!

  • Query for inserting data into table and incrementing the PK.. pls help

    I have one table dd_prohibited_country. prohibit_country_key is the primary key column.
    I have to insert data into dd_prohibited_country based on records already present.
    The scenario I should follow is:
    For level_id 'EA' and prohibited_level_id 'EA' I should retrieve the
    max(prohibit_country_key), and starting from that maximum number I have to insert the rows again
    into dd_prohibited_country. While inserting I have to increment prohibit_country_key and
    replace the values of level_id and prohibited_level_id
    (wherever 'EA' occurs, I have to replace it with 'EUR').
    For instance,
    if there are 15 records in dd_prohibited_country with level_id 'EA' and prohibited_level_id 'EA', then
    I have to insert these 15 records starting with prohibit_country_key 16 (after 15, I should start inserting with number 16).
    I have written the following query for this:
    insert into dd_prohibited_country
    select     
         a.pkey,
         b.levelid,
         b.ieflag,
         b.plevelid
    from
         (select
              max(prohibit_country_key) pkey
         from
              dd_prohibited_country) a,
         (select
    prohibit_country_key pkey,
              replace(level_id,'EA','EUR') levelid,
              level_id_flg as ieflag,
              replace(prohibited_level_id,'EA','EUR') plevelid
         from
              dd_prohibited_country
         where
              level_id = 'EA' or prohibited_level_id = 'EA') b
    My problem here is that I always get a.pkey as 15, because I am not incrementing it.
    I tried incrementing it as well, but I am unable to achieve it.
    Can anyone please help me write this query?
    Thanks in advance
    Regards
    Raghu

    Because you are not incrementing your pkey. Try like this.
    insert
       into dd_prohibited_country
    select a.pkey+b.pkey,
         b.levelid,
         b.ieflag,
         b.plevelid
       from (select     max(prohibit_country_key) pkey
            from dd_prohibited_country) a,
         (select     row_number() over (order by prohibit_country_key)  pkey,
              replace(level_id,'EA','EUR') levelid,
              level_id_flg as ieflag,
              replace(prohibited_level_id,'EA','EUR') plevelid
            from     dd_prohibited_country
           where level_id = 'EA' or prohibited_level_id = 'EA') b
    Note: if you are in a multi-user environment you can get into trouble incrementing your PKey like this.
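    A sketch of the safer alternative that note hints at (the sequence name and START WITH value are assumptions): let a sequence hand out prohibit_country_key values, so concurrent sessions cannot compute the same MAX()+n.
    -- Hypothetical sequence, started one past the current MAX(prohibit_country_key).
    CREATE SEQUENCE dd_prohibited_country_seq START WITH 16;

    INSERT INTO dd_prohibited_country
    SELECT dd_prohibited_country_seq.NEXTVAL,
           REPLACE(level_id, 'EA', 'EUR'),
           level_id_flg,
           REPLACE(prohibited_level_id, 'EA', 'EUR')
    FROM   dd_prohibited_country
    WHERE  level_id = 'EA' OR prohibited_level_id = 'EA';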

  • Query on Increment Process

    Our customer wishes to handle the increment process in the following manner:
    1. For employees who have put in MORE than 1 yr of service in the company as on 1st April, the increment amount is to be processed through the Annual Increment process. In this method, the increment will be effective from 1 April, but the actual process may be executed in April or one of the subsequent months (i.e. with retro effect).
    2. For employees who have LESS than 1 yr of service in the company, the Annual Increment process won't apply. Instead, their increment should be processed on the day they complete one year. E.g. if Joining Date = 25 May 2011, the first increment should be effective from 25 May 2012. This increment may be processed on 25 May 2012 itself or on any date after this (retro effect). When the Annual Increment process is again executed on 1 April 2013 (i.e. the subsequent year), the increment amount for these employees should be pro-rated, based on the date they completed one year of service (e.g. from 25 May 2012 to 31 March 2013, in the above example). From the next financial year (i.e. 1 April 2014) onwards, these employees will get the full increment through the normal Annual Increment processing (without any pro-ration).
    Is it possible to handle the above requirement directly through the standard Increment Process available in SAP? Or, how should we go about this?
    -Shambvi

    Hi
    Please refer to the documentation for the following transaction code: HRPBSIN_SALARY_INCRT

  • How to query the tablespace size increment speed?

    Hello,
    Do you have any directions or ideas on how to find the history of tablespace size increments, or how to know the frequency of tablespace size increments?
    I have taken over a new database that I never managed before, and I want to know how fast the tablespaces are growing so I can plan reasonable space extensions.
    Could you please give us some guidance? Thanks!

    Hi, a good way is to use DBMS_SPACE.OBJECT_GROWTH_TREND, as Rajesh mentioned. You can also use some of the AWR views to get this information. See the link below:
    tablespaces' growth trend
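    A sketch of the AWR route (requires the Diagnostics Pack licence; the join of TABLESPACE_ID to V$TABLESPACE.TS# is the usual mapping, and sizes are reported in database blocks): used space per tablespace per snapshot, from which the growth rate falls out.
    -- DBA_HIST_TBSPC_SPACE_USAGE keeps one row per tablespace per AWR snapshot.
    SELECT s.begin_interval_time,
           t.name                AS tablespace_name,
           u.tablespace_usedsize AS used_blocks
    FROM   dba_hist_tbspc_space_usage u
    JOIN   dba_hist_snapshot          s ON s.snap_id = u.snap_id
                                       AND s.dbid    = u.dbid
    JOIN   v$tablespace               t ON t.ts#     = u.tablespace_id
    ORDER  BY t.name, s.begin_interval_time;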

  • Incremental backup time query?

    Database size = 270GB
    Time taken for full level 0 backup = 50 minutes (Day 1)
    Time taken for cumulative level 1 backup = 70 minutes (Day 2)
    Time taken for differental level 1 backup = 55 minutes (Day 3)
    My question is: how come the incremental backup is taking more time than the full backup?
    Please advise.
    Sami Malik
    [email protected]

    Sami Malik wrote:
    >> Yes, the incremental backup is taken at the same time as there is heavy load on the system. Is this the reason for sure?
    No, not for sure. We haven't seen enough info to say for sure, but given what we have been shown, it is a reasonable first guess. To know for sure, you need to do some actual performance analysis during both full and incremental backups and compare the two. STATSPACK is always a good place to start.
    >> I will try the incremental backup at night when there is nobody connected.
    Even though online backups can be taken anytime, it's always a good idea to schedule them during low-usage times.
    >> But one more question: in the case of a 24/7 system, there is no use in taking incremental backups, right?
    Why would you think that? If your system is 24/7, that's even more reason to be taking incrementals. If you do have to perform a recovery, having incrementals will shorten your recovery time, versus having to apply all archived redo back to the last full backup.
    Also, you should be taking regular full backups as well. Most people take a full backup weekly, and an incremental on the other 6 days of the week.
    As an additional thought: while it is a curiosity that your incremental took longer than the full, and much could be learned by chasing it down, you always have to ask "is this a problem that needs to be solved?" If you schedule all backups (full and incremental) during low-use times, and the users do not notice any impact, do you really care if the incremental took longer than the full? On the other hand, it may not be a problem that "needs" to be solved, but running it down may reveal other problems that are impacting or have real potential to impact the business, and should be solved.
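    For the comparison suggested above, RMAN's own job history is a convenient starting point. A sketch (V$RMAN_BACKUP_JOB_DETAILS exists from 10g onwards; sizes shown in GB): put the elapsed times and volumes of the level 0 and level 1 runs side by side before digging into STATSPACK/AWR.
    -- One row per RMAN backup job, with elapsed time and input/output volumes.
    SELECT start_time,
           input_type,
           status,
           elapsed_seconds,
           ROUND(input_bytes  / 1024 / 1024 / 1024, 1) AS input_gb,
           ROUND(output_bytes / 1024 / 1024 / 1024, 1) AS output_gb
    FROM   v$rman_backup_job_details
    ORDER  BY start_time;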

  • What does it mean when the usecounts of Parse Tree for a view is incrementing when a select query is issued against the view?

    I'm using SQL Server 2008 R2 (10.50.4033) and I'm troubleshooting an issue that a select query against a specific view is taking more than 30 seconds consistently.   The issue just starts happening this week and there is no mass changes in data.  
    The problem only occur if the query is issued from an IIS application but not from SSMS.  One thing I noticed is that sys.dm_exec_cached_plans is returning 2 Parse Tree rows for the view -  one created when the select query is issued
    1st time from the IIS application and another one created when the same select query is issued 1st time from SSMS.   The usecounts of the Parse Tree row for the view (the IIS one) is increasing whenever the select query is issued.  The
    usecounts of the Parse Tree row for the view (the SSMS one) does not increase when the select query is issued again. 
    There seems to be a correlation between the slowness of the query and the increasing of the usecounts of the Parse Tree row for the view.  
    I don't know why there is 2 Parse Tree rows for the view.  There is also 2 Compiled Plan rows for the select query.  
    What does the Parse Tree row mean especially the usecounts column?

    >> The issue just starts happening this week and there is no mass changes in data.
    There might be a mass change in the execution plan for several reasons, even without mass changes in the data.
    If you have the old version and a way to check the old execution plan and compare it to the new one, then that should be your starting point. In most cases you don't have this option and we need to monitor from scratch.
    >> The problem only occur if the query is issued from an IIS application but not from SSMS.
    This means we know exactly what the difference is, and you can compare both execution plans. Once you do, you will find that they are not the same. This is a very common issue, and it is usually the result of different SET options when connecting from different applications. SSMS is an external app like any app you develop in Visual Studio, but SSMS does not use the .NET default options.
    Please check this link, to find the full explanation and solutions:
    http://www.sommarskog.se/query-plan-mysteries.html
    Take a look at sys.dm_exec_sessions for your ASP.Net application and for your SSMS session.
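    One way to see that difference (a sketch; the view name in the filter is a placeholder for your view): pull the set_options attribute for the two cached plans and compare the values of the IIS entry and the SSMS entry.
    SELECT cp.usecounts,
           cp.objtype,
           pa.attribute,
           pa.value            -- bitmask of the SET options the plan was compiled under
    FROM   sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)        AS st
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE  st.text LIKE '%MyView%'   -- 'MyView' is a placeholder for your view name
      AND  pa.attribute = 'set_options';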
    If you need more specific help, then we need more information and less stories :-)
    We need to see the DDL+DML+Query and both execution plans
    >> What does the Parse Tree row mean
    I am not sure what you mean, but the parse tree represents the logical steps necessary to execute the query that has been requested. You can check this tutorial about the execution plan: https://www.simple-talk.com/sql/performance/execution-plan-basics/ or
    this one: http://www.developer.com/db/understanding-a-sql-server-query-execution-plan.html
    >> Regarding the usecounts column, or any other column, check this link:
    https://msdn.microsoft.com/en-us/library/ms187404.aspx?f=255&MSPPError=-2147217396.
      Ronen Ariely
     [Personal Site]    [Blog]    [Facebook]
