Horrible performance and millions of rows to compress.

Hello. I wrote a script to compress a table that holds readings from a computer terminal. I read the original table, insert a row into a new table at the beginning of each new reading, and then update columns for the end time and the count of duplicate readings. The compression works fine, but the more rows I test with, the longer it takes to run. There are 11,000,000 rows to compress, and it hangs whenever I try anything over 20,000. I don't know if it is actually hanging or just taking a long time to start. I put in checks to see when it is running and committing: with a small number of records it sits for a minute, then starts and runs through quickly. When I add more records it just sits there.
Any suggestions?
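(Side note: a quick way to tell whether the block is hanging or just grinding is to watch the session from a second connection; a minimal check, assuming 10g or later, and the program filter below is only a guess that needs adjusting for the client:)
select sid, status, event, seconds_in_wait, sql_id
from   v$session
where  username = USER
and    program like 'sqlplus%';
An ACTIVE status with a changing SQL_ID suggests it is working; a long SECONDS_IN_WAIT on a lock or I/O event suggests it is stuck.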
set serveroutput on;
set serveroutput on size 1000000;

truncate table dms_am_nrt_test;

declare
    rel_counter   number := 1;
    row_counter   number := 0;
    curr_station  number;
    curr_rel_conc number := 0;
    begin_date    date;
    station       number;
    test          varchar2(5);
    curr_index    number;
    trip          number := 5000;
    counter       number := 1;

    cursor c1 is
        select *
        from   gilbert_r.loop_test;
begin
    select station_num
    into   station
    from   gilbert_r.loop_test
    where  rownum = 1;

    curr_station := station;

    for c1_rec in c1 loop
        -- note: this lookup re-reads loop_test once for every row fetched by the cursor
        select station_num
        into   station
        from   gilbert_r.loop_test
        where  nrt_temp_id = c1_rec.nrt_temp_id;

        if curr_station != c1_rec.station_num then
            row_counter  := 0;
            curr_station := station;
        end if;

        if c1_rec.rel_conc > 0
           or row_counter = 0
           or (curr_rel_conc > 0 and c1_rec.rel_conc = 0)
        then
            curr_rel_conc := c1_rec.rel_conc;
            rel_counter   := 1;
            begin_date    := c1_rec.begin_datetime;

            select dms_am_nrt_seq.nextval into curr_index from dual;

            insert into site.dms_am_nrt_test
                   (nrt_id,
                    struc_record_id,
                    begin_datetime,
                    end_date_time,
                    num_readings,
                    station_num,
                    port_num,
                    port_loc,
                    agent,
                    abs_conc,
                    rel_conc,
                    units,
                    height,
                    area,
                    rt,
                    peak_width,
                    station_status,
                    alarm,
                    error,
                    error_code1,
                    error_code2,
                    error_code3,
                    error_code4,
                    error_code5,
                    flow_rate)
            values (curr_index,
                    c1_rec.struc_record_id,
                    begin_date,
                    begin_date,
                    rel_counter,
                    c1_rec.station_num,
                    c1_rec.port_num,
                    c1_rec.port_loc,
                    c1_rec.agent,
                    c1_rec.abs_conc,
                    c1_rec.rel_conc,
                    c1_rec.units,
                    c1_rec.height,
                    c1_rec.area,
                    c1_rec.rt,
                    c1_rec.peak_width,
                    trim(c1_rec.station_status),
                    c1_rec.alarm,
                    trim(c1_rec.error),
                    trim(c1_rec.error_code1),
                    trim(c1_rec.error_code2),
                    trim(c1_rec.error_code3),
                    trim(c1_rec.error_code4),
                    trim(c1_rec.error_code5),
                    c1_rec.flow_rate);

        elsif c1_rec.rel_conc = 0 and curr_rel_conc = 0 then
            rel_counter   := rel_counter + 1;
            curr_rel_conc := c1_rec.rel_conc;
            begin_date    := c1_rec.begin_datetime;

            update site.dms_am_nrt_test
            set    end_date_time = begin_date,
                   num_readings  = rel_counter
            where  nrt_id = curr_index;
        end if;

        if counter = trip then
            commit;
            trip := trip + 5000;
            dbms_output.put_line('Commit');
        end if;

        row_counter := row_counter + 1;
        counter     := counter + 1;
    end loop;

    commit;
end;
/
Message was edited by:
SightSeeker1

Hello
Well, after a bit of playing around, and looking at this Ask Tom article, I've come up with this:
create table dt_test_loop (station number, begin_time date, rel_conc number(3,2));
insert into dt_test_loop values(1, to_date('12:00','hh24:mi'), 0 );
insert into dt_test_loop values(1, to_date('12:01','hh24:mi'), 0);
insert into dt_test_loop values(1, to_date('12:02','hh24:mi'), 0);
insert into dt_test_loop values(1, to_date('12:03','hh24:mi'), 3.3);
insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0);
insert into dt_test_loop values(2, to_date('12:00','hh24:mi'), 0 );
insert into dt_test_loop values(2, to_date('12:01','hh24:mi'), 0);
insert into dt_test_loop values(2, to_date('12:02','hh24:mi'), 0);
insert into dt_test_loop values(2, to_date('12:03','hh24:mi'), 4.2);
insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0);
insert into dt_test_loop values(1, to_date('12:05','hh24:mi'), 0);
select
     station,
     begin_time,
     end_time,
     rel_conc,
     num_readings
from
     (select
           station,
           min(begin_time) over(partition by max_rn order by max_rn) begin_time,
           max(begin_time) over(partition by max_rn order by max_rn) end_time,
           rel_conc,
           count(*) over(partition by max_rn order by max_rn) num_readings,
           row_number() over(partition by max_rn order by max_rn) rn,
           max_rn
      from
           (select
                 station,
                 rel_conc,
                 begin_time,
                 max(rn) over(order by station, begin_time) max_rn
            from
                 (select
                       rel_conc,
                       station,
                       begin_time,
                       case
                            when rel_conc <> lag(rel_conc) over (order by station, begin_time) or
                                 station  <> lag(station)  over (order by station, begin_time) then
                                 row_number() over (order by station, begin_time)
                            when row_number() over (order by station, begin_time) = 1 then 1
                            else
                                 null
                       end rn
                  from
                       dt_test_loop)))
where
     rn = 1;

Which gives:

  STATION BEGIN END_T  REL_CONC NUM_READINGS
        1 12:00 12:02         0            3
        1 12:03 12:03       3.3            1
        1 12:04 12:05         0            3
        2 12:00 12:02         0            3
        2 12:03 12:03       4.2            1

As you can see, it's slightly wrong: the station 1 readings at 12:04 and 12:05 that arrive after station 2's readings have been grouped together with station 1's earlier 12:04 reading. The reason for this is the
over (order by station,begin_time)
part. What you really need is another column (which hopefully you have) that records the sequence of readings, i.e. an insert timestamp or sequence number. If you can use that instead of station and begin time, you are rocking! :-)...
drop table dt_test_loop;
create table dt_test_loop (station number, begin_time date, rel_conc number(3,2), ins_seq number);
insert into dt_test_loop values(1, to_date('12:00','hh24:mi'), 0 ,1);
insert into dt_test_loop values(1, to_date('12:01','hh24:mi'), 0,2);
insert into dt_test_loop values(1, to_date('12:02','hh24:mi'), 0,3);
insert into dt_test_loop values(1, to_date('12:03','hh24:mi'), 3.3,4);
insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0,5);
insert into dt_test_loop values(2, to_date('12:00','hh24:mi'), 0,6 );
insert into dt_test_loop values(2, to_date('12:01','hh24:mi'), 0,7);
insert into dt_test_loop values(2, to_date('12:02','hh24:mi'), 0,8);
insert into dt_test_loop values(2, to_date('12:03','hh24:mi'), 4.2,9);
insert into dt_test_loop values(1, to_date('12:04','hh24:mi'), 0,10);
insert into dt_test_loop values(1, to_date('12:05','hh24:mi'), 0,11);
select
     station,
     begin_time,
     end_time,
     rel_conc,
     num_readings
from
     (select
           station,
           min(begin_time) over(partition by max_rn order by max_rn) begin_time,
           max(begin_time) over(partition by max_rn order by max_rn) end_time,
           rel_conc,
           count(*) over(partition by max_rn order by max_rn) num_readings,
           row_number() over(partition by max_rn order by max_rn) rn,
           max_rn
      from
           (select
                 station,
                 rel_conc,
                 begin_time,
                 max(rn) over(order by ins_seq) max_rn
            from
                 (select
                       rel_conc,
                       station,
                       begin_time,
                       ins_seq,
                       case
                            when rel_conc <> lag(rel_conc) over (order by ins_seq) or
                                 station  <> lag(station)  over (order by ins_seq) then
                                 row_number() over (order by ins_seq)
                            when row_number() over (order by ins_seq) = 1 then 1
                            else
                                 null
                       end rn
                  from
                       dt_test_loop)))
where
     rn = 1;

Which gives:

  STATION BEGIN END_T  REL_CONC NUM_READINGS
        1 12:00 12:02         0            3
        1 12:03 12:03       3.3            1
        1 12:04 12:04         0            1
        2 12:00 12:02         0            3
        2 12:03 12:03       4.2            1
        1 12:04 12:05         0            2

Also, I'm sure it can be simplified a bit more, but that's what I got... :-)
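For what it's worth, the outer two levels can probably be collapsed into a plain GROUP BY once max_rn has been carried forward. A sketch only, using the same hypothetical ins_seq column and only checked against the sample data above:
select station,
       min(begin_time) begin_time,
       max(begin_time) end_time,
       rel_conc,
       count(*)        num_readings
from  (select station,
              rel_conc,
              begin_time,
              -- carry the group-starting row number forward over the null gaps
              max(rn) over (order by ins_seq) max_rn
       from  (select station,
                     rel_conc,
                     begin_time,
                     ins_seq,
                     case
                          when row_number() over (order by ins_seq) = 1
                            or rel_conc <> lag(rel_conc) over (order by ins_seq)
                            or station  <> lag(station)  over (order by ins_seq)
                          then row_number() over (order by ins_seq)
                     end rn
              from   dt_test_loop))
group by station, rel_conc, max_rn
order by max_rn;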
HTH
David

Similar Messages

  • Why such horrible performance and unwillingness for Verizon to help?

    I have FiOS with Internet, TV, five cell phones and terrible performance. I can barely connect from one room to another using wireless. Occasionally it works, often at substandard strength. When I come home from people's houses the connections are great, even when the connection is from the neighbor next door! I have complained several times. I have BEGGED to have a modern router put in. The router is the original antiquated machine they originally put in. I think it runs on vacuum tubes. Tech support will not swap it out. Meanwhile my neighbor across the street mentions a little problem and, boom, they get a new router. It is still an 802.11g. They want you to pay for an N-level router. I pay $600 a MONTH for crummy service with no willingness to help out.
    Can anyone tell me if Cablevision has better service? I just got back from my cousin's, and one of their service guys came over, spent two hours checking things out and went out of his way to verify the service was up to or better than standard. I was very impressed. The Verizon guys seem not the slightest bit interested in helping, but they did want to prove they were smarter than anybody else.
    Any advice or recommendations would be very welcome. Please help. I give up on these corporate thieves. Please don't hesitate to advise. Thanks.
    Bill
    Email info removed as required by the Terms of Service.
    Message was edited by: Admin Moderator

    lagagnon wrote:
    MrKsoft wrote: I've run a very usable Ubuntu/GNOME/Compiz based system on my P2/450, 320MB RAM, with a Radeon 7500 before, and that's even older hardware, with bulkier software on top of it.
    I'm sorry, but I find that very hard to believe. I work with older computers all the time - I volunteer with a charity that gets donated computers and we install either Puppy or Ubuntu on them, depending on vintage. On a P450 with only 320MB, almost any machine of that vintage will run like a dog with Ubuntu, and Compiz would be a no-go. It would be using the swap partition all the time and the graphics would be pretty slow going.
    Hey, believe it: http://www.youtube.com/watch?v=vXwGMf141VQ
    Of course, this was three years ago.  Probably wouldn't go so well now.
    To start helping you diagnose your problems please reboot your computer and before you start loading software show us the output to "ps aux", "free", "df -h", "lspci" and "lsmod" so we can check your system basics. You could paste all those over to pastebin.ca if you wish.
    Here's everything over at pastebin: http://pastebin.ca/2005110

  • Thread-safe and performant way to return rows and then delete them

    Hi all
    I have a table containing rows to be processed by Java. These rows need to be returned to Java, then they will be processed and sent to a JMS queue, and if that JMS operation is successful they need to be deleted from the Oracle table.
    The current method is:
    Java calls Oracle SP with 'numrows' parameter.
    Oracle SP updates that number of rows in the table with a batch ID from a sequence, and commits.
    Oracle SP returns the Batch_ID to Java.
    Java then selects * from table where batch_id = XXXX;
    Java sends messages to JMS. If JMS transaction is OK, Java deletes from table where batch_Id =xxxx and commits;
    Clearly this isn't very efficient. What I would like to do is this:
    Java calls Oracle SP with 'numrows' parameter
    Oracle SP returns that many rows in a cursor and deletes them from the table simultaneously. Oracle SP does not open a new transaction - the transaction is controlled from Java.
    Java writes to JMS. If JMS is OK, Java commits its DB transaction and thus the rows are deleted.
    Therefore there's only a single DML operation - a DELETE.
    The trouble is, this is not threadsafe - if I have two Java threads calling the Oracle SP, then thread #2 may return rows that thread #1 already got - because thread #1 has not yet committed its delete, and thread #2 can select those rows. Thread #2 will then block waiting to delete them until thread #1 has finished its delete, then thread #2 will get "0 rows deleted". But Java will still have been sent those rows.
    How can I engineer this method to be as efficient as possible while still being threadsafe? The key problem I'm having is that the DELETE operation doesn't prevent the rows being SELECTed by other threads - if there was a way to DELETE without committing but also immediately make those rows unavailable to other threads, that would work I think.
    Any help much appreciated
    Tom

    Hi Tom,
    You forgot to "mention" your version.
    I'm not sure, but I believe [SKIP LOCKED|http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_10002.htm#SQLRF01702] is safe to use. At least in 11.1. (I have used in both 9i and 10g where it was unsupported/undocumented)
    There is of course always the boring way: A single thread.
    Regards
    Peter
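    To illustrate what SKIP LOCKED buys you here, a minimal sketch (the queue table and column names are invented, returning the rows to the caller is omitted, and the LIMIT plays the role of the 'numrows' parameter):
    declare
      cursor c_batch is
        select msg_id, payload
        from   msg_queue            -- hypothetical queue table
        for update skip locked;     -- rows locked by other sessions are silently skipped
      type t_batch is table of c_batch%rowtype;
      l_rows t_batch;
    begin
      open c_batch;
      fetch c_batch bulk collect into l_rows limit 100;   -- the "numrows" batch size
      close c_batch;
      forall i in 1 .. l_rows.count
        delete from msg_queue where msg_id = l_rows(i).msg_id;
      -- no commit here: the rows stay locked and deleted-but-uncommitted,
      -- so the caller (Java) can still commit or roll back after the JMS send
    end;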

  • Loading millions of rows using SQL*loader to a table with constraints

    I have a table with constraints and I need to load millions of rows in it using SQL*Loader.
    What is the best way to do this, meaning which SQL*Loader options to use for the best loading performance, and how should I deal with the constraints?
    Regards

    - Check if your table has check constraints (like column not null). If you trust the data in the file you are loading, you can disable these constraints and re-enable them after the load.
    - Check if you can modify the table and place it in NOLOGGING mode (generates less redo, but ONLY in SOME conditions).
    Hope it helps
    Rui Madaleno
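    A sketch of the disable/load/re-enable pattern described above, with invented object names, in case it helps (direct path loading is usually where SQL*Loader gains the most):
    -- before the load (hypothetical table/constraint names)
    alter table target_tab disable constraint target_tab_ck;
    alter table target_tab nologging;
    -- load with direct path, e.g. from the shell:
    --   sqlldr userid=scott control=target_tab.ctl direct=true
    -- after the load
    alter table target_tab enable constraint target_tab_ck;   -- or enable novalidate constraint ...
    alter table target_tab logging;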

  • Help on table update - millions of rows

    Hi,
    I am trying to do the following, however the process is taking a lot of time. Can someone suggest the best way to do it?
    qtemp1 - 500,000 rows
    qtemp2 - 50 Million rows
    UPDATE qtemp2 qt
    SET product = (SELECT product_cd
                   FROM qtemp1 qtemp
                   WHERE qt.quote_id = qtemp.quote_id)
    WHERE processed_ind = 'P'
    I have created indexes on product, product_cd and quote_id on both the tables.
    Thank you

    There are two basic I/O read operations that need to be done to find the required rows.
    1. In QTEMP1 find row for a specific QUOTE_ID.
    2. In QTEMP2 find all rows where PROCESSED_IND is equal to 'P'.
    For every row in (2), the I/O in (1) is executed. So, if (2) returns 10 million rows, then (1) will be executed 10 million times.
    So you want QTEMP1 to be optimised for access via QUOTE_ID - at best it should be using a unique index.
    Access on QTEMP2 is more complex. I assume that the process indicator is a column with low cardinality. In addition, being a process status indicator it is likely a candidate for being changed via UPDATE statements - in which case it is a very poor candidate for either a B+ tree index or a Bitmap index.
    Even if indexed, a large number of rows may be of process type 'P' - in which case the CBO will rightly decide not to waste I/O on the index, but instead spend all the I/O instead on a faster full table scan.
    In this case, (2) will result in all 50 million rows being read - and for each row that has a 'P' process indicator, (1) being called.
    Any way you look at this, it is a major processing request for the database to perform. It involves a lot of I/O and can involve a huge number of nested SQL calls to QTEMP1... so this is obviously going to be a slow-performing process. The majority of elapsed processing time will be spent waiting for I/O from disks.
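    If the logic allows it, a single set-based statement may avoid the nested lookups entirely. A sketch only, assuming quote_id is unique in qtemp1 and that processed_ind is a column of qtemp2:
    MERGE INTO qtemp2 qt
    USING qtemp1 qtemp
    ON (qt.quote_id = qtemp.quote_id)
    WHEN MATCHED THEN
      UPDATE SET qt.product = qtemp.product_cd
      WHERE qt.processed_ind = 'P';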

  • Enhance a SQL query with update of millions of rows

    Hi,
    I have developed this query to update around 200 million rows on my production system. I did my best, but please give me your recommendations/concerns to make it perform better.
    DECLARE @ORIGINAL_ID AS BIGINT
    SELECT FID001 INTO #Temp001_
    FROM INBA004 WHERE RS_DATE>='1999-01-01'
    AND RS_DATE<'2014-01-01' AND CLR_f1st='SSLM'
    and FID001 >=12345671
    WHILE (SELECT COUNT(*) FROM #Temp001_ ) <>0
    BEGIN
    SELECT TOP 1 @ORIGINAL_ID=FID001 FROM #Temp001_ ORDER BY FID001
    PRINT CAST (@ORIGINAL_ID AS VARCHAR(100))+' STARTED'
    SELECT DISTINCT FID001
    INTO #OUT_FID001
    FROM OUTTR009 WHERE TRANSACTION_ID IN (SELECT TRANSACTION_ID FROM
    INTR00100 WHERE FID001 = @ORIGINAL_ID)
    UPDATE A SET RCV_Date=B.TIME_STAMP
    FROM OUTTR009 A INNER JOIN INTR00100 B
    ON A.TRANSACTION_ID=B.TRANSACTION_ID
    WHERE A.FID001 IN (SELECT FID001 FROM #OUT_FID001)
    AND B.FID001=@ORIGINAL_ID
    UPDATE A SET Sending_Date=B.TIME_STAMP
    FROM INTR00100 A INNER JOIN OUTTR009 B
    ON A.TRANSACTION_ID=B.TRANSACTION_ID
    WHERE A.FID001=@ORIGINAL_ID
    AND B.FID001 IN (SELECT FID001 FROM #OUT_FID001)
    DELETE FROM #Temp001_ WHERE FID001=@ORIGINAL_ID
    DROP TABLE #OUT_FID001
    PRINT CAST (@ORIGINAL_ID AS VARCHAR(100))+' FINISHED'
    END

    DECLARE @x INT
    SET @x = 1
    WHILE @x < 44000000  -- Set appropriately
    BEGIN
        UPDATE Table SET a = c+d where ID BETWEEN @x AND @x + 10000
        SET @x = @x + 10000
    END
    Make sure that the ID column has a clustered index (CI) on it.
    Best Regards,Uri Dimant SQL Server MVP,
    http://sqlblog.com/blogs/uri_dimant/
    MS SQL optimization: MS SQL Development and Optimization
    MS SQL Consulting:
    Large scale of database and data cleansing
    Remote DBA Services:
    Improves MS SQL Database Performance
    SQL Server Integration Services:
    Business Intelligence

  • Horrible performance with JDK 1.3.1!

    There is something very weird going on with Solaris (SPARC) JDK 1.3.1
    Hotspot Server jvm. We have an existing application which gets a large
    resultset back from the database, iterates over it and populates two int[]
    arrays.
    The operation iterates thru a 200,000 row resultset, which has three
    integer columns.
    Here are the performance numbers for various jdk (hotspot 2.0) combinations:
    Using Solaris JDK1.2.2_05a with the jit: 3 seconds
    Using Solaris JDK1.3.0 -server: <3 seconds
    Using Windows JDK1.3.1 -server: <3 seconds
    Using Solaris JDK1.3.1 -client: 7 seconds
    Using Solaris JDK1.3.1 -server: 3 MINUTES!
    As you can see the solaris 1.3.1 -server is having horrible performance, 60X
    worse than jdk1.3.0 -server and 1.2.2_05a with jit. I thought it was a
    problem with 1.3.1 on solaris, so I tried the 1.3.1 -client and while the
    performance was much better than 3 minutes, it still was slower than (which
    I expected since -client is meant for client side apps). I have no idea why
    this is happening. Below are the details of the problem.
    Oracle 8.1.7 on solaris 2.7
    Solaris 2.6
    Oracle Thin JDBC driver for 8.1.7 (classes12.zip)
    Code:
    String strQuery = "select entity_id, entity_child, period_id" +
                      " FROM entity_map" +
                      " WHERE entity_id >= 0" +
                      " AND period_id in (" + sEmptyPeriodIds + ")" +
                      " ORDER BY period_id, entity_id ";
    boolean bAddedChildren = false;
    int intEntityId;
    int intChildId;
    int intPeriodId;
    // objDatabase just wraps creation and execution of the resultset
    rs = objDatabase.executeSQLAndReturnResultSet( strQuery );
    // Timing starts here
    while ( rs.next() ) {
        intEntityId = rs.getInt( 1 );
        intChildId  = rs.getInt( 2 );
        intPeriodId = rs.getInt( 3 );
        // embo.addEntityMap( intPeriodId, intEntityId, intChildId );
        bAddedChildren = true;
    }
    // Timing ends here
    If anyone has had similar problems, I'd love to hear about it.
    Something is really really wrong with how 1.3.1 -server is optimizing the
    oracle jdbc code. Problem is that this is a black box, with no source
    available. Doesn't oracle test new versions of sun jvm's when they come
    out??
    Thanks,
    Darren

    Darren,
    Good luck on trying to get any support for JDK 1.3.x with the ORACLE drivers. ORACLE doesn't support JDK 1.3.x yet. We've had other problems with the ORACLE 8.1.7 drivers. Have you tried running the same bench marks using the 8.1.6 or 8.1.6.2 drivers? I would be interested to find out if the performance problems are driver related or just JVM.
    -Peter
    See http://technet.oracle.com:89/ubb/Forum8/HTML/003853.html

  • How to write a cursor to check every row of a table which has millions of rows

    Hello everyone.
    I need help, please... Below is the script (sample data); you can run it directly in SQL Server Management Studio.
    Here we need to update the PPTA_Status column in the donation table. There WILL BE 3 statuses: A1, A2 and Q.
    We need to update the PPTA_status of January donations only. We need to write a cursor. As this is sample data we have only a few donations (rows), but the real table has millions of rows, and we need to check every row.
    If I run the cursor for January, it should take every row of January, row by row.
    We have donations in the don_sample table; I need to check the test results in the result_sample table for those donations and update the PPTA_status column.
    We need to check all the donations of January one by one. For every donation, we need to check the 2 previous donations, in the following way.
    To find the previous donations of a donation, first look up the donor of that donation, then find the previous donations of that donor. In this way we need to check the 2 previous donations.
    If there are 2 previous donations and they both have test results, we need to set the PPTA_STATUS column of this donation to 'Q'.
    If the 2 previous donation_numbers have test_code values of (9, 10, 11) in the result_sample table, it means those donations have results.
    The BWX72 donor in the sample data I gave is an example of the above scenario.
    For the donation we are checking, if it has only 1 previous donation and that donation has a result in the result_sample table, then set this donation's status to A2, after checking the result of this donation as well.
    The ZBW24 donor in the sample data I gave is an example of the above scenario.
    For the donation we are checking, if it has only 1 previous donation and that donation does NOT have a result in the result_sample table, then set this donation's status to A1, again after checking the result of this donation.
    The PGH56 donor in the sample data I gave is an example of the above scenario.
    In this way we need to check all the donations in the don_sample table; it has millions of rows for every month.
    We need to join don_sample and result_sample by donation_number, and check the test_code column for results.
    -- creating table
    CREATE TABLE [dbo].[DON_SAMPLE](
    [donation_number] [varchar](15) NOT NULL,
    [donation_date] [datetime] NULL,
    [donor_number] [varchar](12) NULL,
    [ppta_status] [varchar](5) NULL,
    [first_time_donation] [bit] NULL,
    [days_since_last_donation] [int] NULL
    ) ON [PRIMARY]
    --inserting values
    Insert into [dbo].[DON_SAMPLE] ([donation_number],[donation_date],[donor_number],[ppta_status],[first_time_donation],[days_since_last_donation])
    Select '27567167','2013-12-11 00:00:00.000','BWX72','A',1,0
    Union ALL
    Select '36543897','2014-12-26 00:00:00.000','BWX72','A',0,32
    Union ALL
    Select '47536542','2014-01-07 00:00:00.000','BWX72','A',0,120
    Union ALL
    Select '54312654','2014-12-09 00:00:00.000','JPZ41','A',1,0
    Union ALL
    Select '73276321','2014-12-17 00:00:00.000','JPZ41','A',0,64
    Union ALL
    Select '83642176','2014-01-15 00:00:00.000','JPZ41','A',0,45
    Union ALL
    Select '94527541','2014-12-11 00:00:00.000','ZBW24','A',0,120
    Union ALL
    Select '63497874','2014-01-13 00:00:00.000','ZBW24','A',1,0
    Union ALL
    Select '95786348','2014-12-17 00:00:00.000','PGH56','A',1,0
    Union ALL
    Select '87234156','2014-01-27 00:00:00.000','PGH56','A',1,0
    --- creating table
    CREATE TABLE [dbo].[RESULT_SAMPLE](
    [test_result_id] [int] IDENTITY(1,1) NOT NULL,
    [donation_number] [varchar](15) NOT NULL,
    [donation_date] [datetime] NULL,
    [test_code] [varchar](5) NULL,
    [test_result_date] [datetime] NULL,
    [test_result] [varchar](50) NULL,
    [donor_number] [varchar](12) NULL
    ) ON [PRIMARY]
    ---SET IDENTITY_INSERT dbo.[RESULT_SAMPLE] ON
    ---- inserting values
    Insert into [dbo].RESULT_SAMPLE( [test_result_id], [donation_number], [donation_date], [test_code], [test_result_date], [test_result], [donor_number])
    Select 278453,'27567167','2013-12-11 00:00:00.000','0009','2014-01-20 00:00:00.000','N','BWX72'
    Union ALL
    Select 278454,'27567167','2013-12-11 00:00:00.000','0010','2014-01-20 00:00:00.000','NEG','BWX72'
    Union ALL
    Select 278455,'27567167','2013-12-11 00:00:00.000','0011','2014-01-20 00:00:00.000','N','BWX72'
    Union ALL
    Select 387653,'36543897','2014-12-26 00:00:00.000','0009','2014-01-24 00:00:00.000','N','BWX72'
    Union ALL
    Select 387654,'36543897','2014-12-26 00:00:00.000','0081','2014-01-24 00:00:00.000','NEG','BWX72'
    Union ALL
    Select 387655,'36543897','2014-12-26 00:00:00.000','0082','2014-01-24 00:00:00.000','N','BWX72'
    UNION ALL
    Select 378245,'73276321','2014-12-17 00:00:00.000','0009','2014-01-30 00:00:00.000','N','JPZ41'
    Union ALL
    Select 378246,'73276321','2014-12-17 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 378247,'73276321','2014-12-17 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','JPZ41'
    UNION ALL
    Select 561234,'83642176','2014-01-15 00:00:00.000','0081','2014-01-19 00:00:00.000','N','JPZ41'
    Union ALL
    Select 561235,'83642176','2014-01-15 00:00:00.000','0082','2014-01-19 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 561236,'83642176','2014-01-15 00:00:00.000','0083','2014-01-19 00:00:00.000','NEG','JPZ41'
    Union ALL
    Select 457834,'94527541','2014-12-11 00:00:00.000','0009','2014-01-30 00:00:00.000','N','ZBW24'
    Union ALL
    Select 457835,'94527541','2014-12-11 00:00:00.000','0010','2014-01-30 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 457836,'94527541','2014-12-11 00:00:00.000','0011','2014-01-30 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 587345,'63497874','2014-01-13 00:00:00.000','0009','2014-01-29 00:00:00.000','N','ZBW24'
    Union ALL
    Select 587346,'63497874','2014-01-13 00:00:00.000','0010','2014-01-29 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 587347,'63497874','2014-01-13 00:00:00.000','0011','2014-01-29 00:00:00.000','NEG','ZBW24'
    Union ALL
    Select 524876,'87234156','2014-01-27 00:00:00.000','0081','2014-02-03 00:00:00.000','N','PGH56'
    Union ALL
    Select 524877,'87234156','2014-01-27 00:00:00.000','0082','2014-02-03 00:00:00.000','N','PGH56'
    Union ALL
    Select 524878,'87234156','2014-01-27 00:00:00.000','0083','2014-02-03 00:00:00.000','N','PGH56'
    select * from DON_SAMPLE
    order by donor_number
    select * from RESULT_SAMPLE
    order by donor_number

    You didn't mention the version of SQL Server.  It's important, because SQL Server 2012 makes the job much easier (and will also run much faster, by dodging a self join).  (As Kalman said, the OVER clause contributes to this answer).  
    Both approaches below avoid needing the cursor at all.  (There was part of your explanation I didn't understand fully, but I think these suggestions work regardless)
    Here's a SQL 2012 answer, using LAG() to lookup the previous 1 and 2 donation codes by Donor:  (EDIT: I overlooked a couple things in this post: please refer to my follow-up post for the final/fixed answer.  I'm leaving this post with my overlooked
    items, for posterity).
    With Results_Interim as
    (
         Select *
         , count('x') over(partition by donor_number) as Ct_Donations
         , Lag(test_code, 1) over(partition by donor_number order by donation_date) as PrevDon1
         , Lag(test_code, 2) over(partition by donor_number order by donation_date) as PrevDon2
         from RESULT_SAMPLE
    )
    Select *
    , case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
           when PrevDon1 in (9, 10, 11) then 'A2'
           when PrevDon1 is not null then 'A1'
      End as NEWSTATUS
    from Results_Interim
    Where Test_result_Date >= '2014-01' and Test_result_Date < '2014-02'
    Order by Donor_Number, donation_date
    And a SQL 2005 or greater version, not using SQL 2012 new features
    With Results_Temp as
    (
         Select *
         , count('x') over(partition by donor_number) as Ct_Donations
         , Row_Number() over(partition by donor_number order by donation_date) as RN_Donor
         from RESULT_SAMPLE
    )
    , Results_Interim as
    (
         Select R1.*, P1.test_code as PrevDon1, P2.Test_Code as PrevDon2
         From Results_Temp R1
         left join Results_Temp P1 on P1.Donor_Number = R1.Donor_Number and P1.Rn_Donor = R1.RN_Donor - 1
         left join Results_Temp P2 on P2.Donor_Number = R1.Donor_Number and P2.Rn_Donor = R1.RN_Donor - 2
    )
    Select *
    , case when PrevDon1 in (9, 10, 11) and PrevDon2 in (9, 10, 11) then 'Q'
           when PrevDon1 in (9, 10, 11) then 'A2'
           when PrevDon1 is not null then 'A1'
      End as NEWSTATUS
    from Results_Interim
    Where Test_result_Date >= '2014-01' and Test_result_Date < '2014-02'
    Order by Donor_Number, donation_date
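    Just to round it out, here is one way such a computed status could be written back to DON_SAMPLE in a single set-based statement. This is a sketch only (SQL 2012 syntax, my own reading of the previous-donation rules, test codes taken as the varchar values from the sample DDL), so treat the CASE branches as something to verify against your real rules:
    ;WITH Donations AS
    (
        -- one row per donation, flagged with whether it already has test results
        SELECT d.donation_number,
               d.donor_number,
               d.donation_date,
               CASE WHEN EXISTS (SELECT 1
                                 FROM   RESULT_SAMPLE r
                                 WHERE  r.donation_number = d.donation_number
                                 AND    r.test_code IN ('0009', '0010', '0011'))
                    THEN 1 ELSE 0 END AS has_result
        FROM   DON_SAMPLE d
    ),
    Flagged AS
    (
        SELECT donation_number,
               donation_date,
               LAG(has_result, 1) OVER (PARTITION BY donor_number ORDER BY donation_date) AS prev1,
               LAG(has_result, 2) OVER (PARTITION BY donor_number ORDER BY donation_date) AS prev2
        FROM   Donations
    )
    UPDATE d
    SET    d.ppta_status = CASE WHEN f.prev1 = 1 AND f.prev2 = 1     THEN 'Q'
                                WHEN f.prev1 = 1 AND f.prev2 IS NULL THEN 'A2'
                                WHEN f.prev1 = 0 AND f.prev2 IS NULL THEN 'A1'
                                ELSE d.ppta_status
                           END
    FROM   DON_SAMPLE AS d
    JOIN   Flagged    AS f ON f.donation_number = d.donation_number
    WHERE  d.donation_date >= '20140101'
      AND  d.donation_date <  '20140201';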

  • HP Color LaserJet Pro MFP M177fw makes horrible noise and will not move past initializing

    Makes horrible noise and will not move past initializing.

    Hi @HealthITLadyRed,
    I read your post and see that the printer is making noise and is stuck initializing. I would really like to be able to help you resolve this issue.
    I have provided some steps to try to see if we can resolve this issue.
    Check and remove any packaging material inside the printer and toner.
    Disconnect the USB/Network/FAX and power the printer on Standalone.
    Do a hard reset to see if that will resolve the issue.
    Leave the printer on and unplug the power cable from the printer and wall outlet for 60 seconds.
    Then reconnect the power cable to the printer and wall outlet rather than a surge protector.
    This ensures the printer is receiving full power and may help this situation.
    If the issue persists, perform a power drain by disconnecting the power cord from the back of the printer and keeping the power button pressed for 30 seconds; if there is a power switch, keep it in the ON position for 30 seconds before reconnecting the power cord.
    Update the printer's firmware by a USB connection. Software and Drivers.
    Select Option 2 and wait for the page to load and then select the link for firmware. This resolved the previous poster's issue.
    I can send a private message with another step to try.
    In the forum beside your handle name just click on the envelope to view it.
    How is the printer connected? (USB/Ethernet/Wireless)
    What happened prior to this issue? (paper jam, changed toner) Any feedback would be appreciated.
    Is this a new printer?
    If you appreciate my efforts, please click the Thumbs up button below.
    If there is anything else I can help you with, just let me know. Thank You.
    Please click “Accept as Solution ” if you feel my post solved your issue, it will help others find the solution.
    Click the “Kudos Thumbs Up" on the right to say “Thanks” for helping!
    Gemini02
    I work on behalf of HP

  • Report and data coming out wrong after compressing data with full optimization

    In SAP BPC version 5.1, to increase system performance we did a full optimization with data compression.
    This process ended with an error; after logging into the system, the report and values are coming out wrong.
    What is wrong, and how do we rectify it?
    Regards
    prakash J

    This issue is resolved.

  • QUERY PERFORMANCE AND DATA LOADING PERFORMANCE ISSUES

    What are the query performance issues we need to take care of? Please explain and let me know the T-codes; this is urgent.
    What are the data loading performance issues we need to take care of? Please explain and let me know the T-codes; this is urgent.
    Will reward full points.
    REGARDS
    GURU

    BW Back end
    Some Tips -
    1)Identify long-running extraction processes on the source system. Extraction processes are performed by several extraction jobs running on the source system. The run-time of these jobs affects the performance. Use transaction code SM37 — Background Processing Job Management — to analyze the run-times of these jobs. If the run-time of data collection jobs lasts for several hours, schedule these jobs to run more frequently. This way, less data is written into update tables for each run and extraction performance increases.
    2)Identify high run-times for ABAP code, especially for user exits. The quality of any custom ABAP programs used in data extraction affects the extraction performance. Use transaction code SE30 — ABAP/4 Run-time Analysis — and then run the analysis for the transaction code RSA3 — Extractor Checker. The system then records the activities of the extraction program so you can review them to identify time-consuming activities. Eliminate those long-running activities or substitute them with alternative program logic.
    3)Identify expensive SQL statements. If database run-time is high for extraction jobs, use transaction code ST05 — Performance Trace. On this screen, select ALEREMOTE user and then select SQL trace to record the SQL statements. Identify the time-consuming sections from the results. If the data-selection times are high on a particular SQL statement, index the DataSource tables to increase the performance of selection (see no. 6 below). While using ST05, make sure that no other extraction job is running with ALEREMOTE user.
    4)Balance loads by distributing processes onto different servers if possible. If your site uses more than one BW application server, distribute the extraction processes to different servers using transaction code SM59 — Maintain RFC Destination. Load balancing is possible only if the extraction program allows the option
    5)Set optimum parameters for data-packet size. Packet size affects the number of data requests to the database. Set the data-packet size to optimum values for an efficient data-extraction mechanism. To find the optimum value, start with a packet size in the range of 50,000 to 100,000 and gradually increase it. At some point, you will reach the threshold at which increasing packet size further does not provide any performance increase. To set the packet size, use transaction code SBIW — BW IMG Menu — on the source system. To set the data load parameters for flat-file uploads, use transaction code RSCUSTV6 in BW.
    6)Build indexes on DataSource tables based on selection criteria. Indexing DataSource tables improves the extraction performance, because it reduces the read times of those tables.
    7)Execute collection jobs in parallel. Like the Business Content extractors, generic extractors have a number of collection jobs to retrieve relevant data from DataSource tables. Scheduling these collection jobs to run in parallel reduces the total extraction time, and they can be scheduled via transaction code SM37 in the source system.
    8). Break up your data selections for InfoPackages and schedule the portions to run in parallel. This parallel upload mechanism sends different portions of the data to BW at the same time, and as a result the total upload time is reduced. You can schedule InfoPackages in the Administrator Workbench.
    You can upload data from a data target (InfoCube and ODS) to another data target within the BW system. While uploading, you can schedule more than one InfoPackage with different selection options in each one. For example, fiscal year or fiscal year period can be used as selection options. Avoid using parallel uploads for high volumes of data if hardware resources are constrained. Each InfoPackage uses one background process (if scheduled to run in the background) or dialog process (if scheduled to run online) of the application server, and too many processes could overwhelm a slow server.
    9). Building secondary indexes on the tables for the selection fields optimizes these tables for reading, reducing extraction time. If your selection fields are not key fields on the table, primary indexes are not much of a help when accessing data. In this case it is better to create secondary indexes with selection fields on the associated table using ABAP Dictionary to improve better selection performance.
    10)Analyze upload times to the PSA and identify long-running uploads. When you extract the data using PSA method, data is written into PSA tables in the BW system. If your data is on the order of tens of millions, consider partitioning these PSA tables for better performance, but pay attention to the partition sizes. Partitioning PSA tables improves data-load performance because it's faster to insert data into smaller database tables. Partitioning also provides increased performance for maintenance of PSA tables — for example, you can delete a portion of data faster. You can set the size of each partition in the PSA parameters screen, in transaction code SPRO or RSCUSTV6, so that BW creates a new partition automatically when a threshold value is reached.
    11)Debug any routines in the transfer and update rules and eliminate single selects from the routines. Using single selects in custom ABAP routines for selecting data from database tables reduces performance considerably. It is better to use buffers and array operations. When you use buffers or array operations, the system reads data from the database tables and stores it in the memory for manipulation, improving performance. If you do not use buffers or array operations, the whole reading process is performed on the database with many table accesses, and performance deteriorates. Also, extensive use of library transformations in the ABAP code reduces performance; since these transformations are not compiled in advance, they are carried out during run-time.
    12)Before uploading a high volume of transaction data into InfoCubes, activate the number-range buffer for dimension IDs. The number-range buffer is a parameter that identifies the number of sequential dimension IDs stored in the memory. If you increase the number range before high-volume data upload, you reduce the number of reads from the dimension tables and hence increase the upload performance. Do not forget to set the number-range values back to their original values after the upload. Use transaction code SNRO to maintain the number range buffer values for InfoCubes.
    13)Drop the indexes before uploading high-volume data into InfoCubes. Regenerate them after the upload. Indexes on InfoCubes are optimized for reading data from the InfoCubes. If the indexes exist during the upload, BW reads the indexes and tries to insert the records according to the indexes, resulting in poor upload performance. You can automate the dropping and regeneration of the indexes through InfoPackage scheduling. You can drop indexes in the Manage InfoCube screen in the Administrator Workbench.
    14)IDoc (intermediate document) archiving improves the extraction and loading performance and can be applied on both BW and R/3 systems. In addition to IDoc archiving, data archiving is available for InfoCubes and ODS objects.
    Hope it Helps
    Chetan
    @CP..

  • FAQ's, intros and memorable discussions in the Performance and Tuning Forum

    Welcome to the SDN ABAP Performance and Tuning Forum!
    In addition to release-dependent information available by:
    - pressing the F1 key on an ABAP statement,
    - or searching for them in transaction ABAPDOCU,
    - using the [SDN ABAP Development Forum Search|https://www.sdn.sap.com/irj/sdn/directforumsearch?threadid=&q=&objid=c42&daterange=all&numresults=15&rankby=10001],
    - the information accessible via the [SDN ABAP Main Wiki|https://wiki.sdn.sap.com/wiki/display/ABAP],
    - the [SAP Service Marketplace|http://service.sap.com] and see [SAP Note 192194|https://service.sap.com/sap/support/notes/192194] for search tips,
    - the 3 part [How to write guru ABAP code series|https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/f2dac69e-0e01-0010-e2b6-81c1e8e5ce50] ... (use the search to easily find the other 2 documents...)
    ... this "sticky post" lists some threads from the ABAP forums as:
    - An introduction for new members / visitors on topics discussed in threads,
    - An introduction to how the forums are used and the quality expected,
    - A collection of some threads which provided useful answers to frequently asked questions, and,
    - A collection of some memorable threads if you feel like reading some ABAP related material.
    The listed threads will be enhanced from time to time. Please feel welcome to post to [this thread|Suggestions thread for ABAP FAQ sticky] to suggest any additional inclusions.
    Note: When asking a question in the forum, please also provide sufficient information such that the question can be answered usefully, do not repeat interview-type questions, and once closed please indicate which solution was useful - to help others who search for it.

    ABAP Performance and Tuning
    Read Performance   => Gurus take over the discussion from Guests caught cheating the points-system.
    SELECT INTO TABLE => Initial questions often result in interesting follow-up discussions.
    Inner Joins vs For all Entries. => Including infos about system parameters for performance optimization.
    Inner Join Vs Database view Vs for all entries => Useful literature recommended by performance guru YukonKid.
    Inner Joins vs For All Entries - performance query => Performance legends unplugged... read the blogs as well.
    The ABAP Runtime Trace (SE30) - Quick and Easy => New tricks in old tools. See other blogs by the same author as well.
    Skip scan used instead of (better?) range scan => Insider information on how index access works.
    DELETE WHERE sample case that i would like to share with you => Experts discussing the deletion of data from internal tables.
    Impact of Order of fields in Secondary  index => Also discussing order of fields in WHERE-clause
    "SELECT SINGLE" vs. "SELECT UP TO 1 ROWS" => Better for performance or semantics?
    into corresponding fields of table VERSUS into table => detailed discussion incl. runtime measurements
    Indexes making program run slower... => Everything you ever wanted to know about Oracle indexes.
    New! Mass reading standard texts (STXH, STXL) => avoiding single calls to READ_TEXT for time-critical processes
    New! Next Generation ABAP Runtime Analysis (SAT) => detailed introduction to the successor of SE30
    New! Points to note when using FOR ALL ENTRIES => detailed blog on the pitfall(s) a developer might face when using FAE
    New! Performance: What is the best way to check if a record exist on a table ? => Hermann's tips on checking existence of a record in a table
    Message was edited by: Oxana Noa Zubarev

  • Can someone please tell me how to implement expand and collapse of table row data?

    I am trying to implement expand and collapse of table row data but I haven't had any ideas. Can anyone please help me? It's an urgent requirement.

    Yes, we can.   
    I think the best place for you to start for this is the NI Developer Zone.  I recommend beginning with these tutorials I found by searching on "data log rio".  There were more than just these few that might be relevant to your project but I'll leave that for you to decide.
    NI Compact RIO Setup and Services ->  http://zone.ni.com/devzone/cda/tut/p/id/11394
    Getting Started with CompactRIO - Logging Data to Disk  ->  http://zone.ni.com/devzone/cda/tut/p/id/11198
    Getting Started with CompactRIO - Performing Basic Control ->  http://zone.ni.com/devzone/cda/tut/p/id/11197
    These will probably give you links to more topics/tutorials/examples that can help you design and implement your target system.
    Jason
    Wire Warrior
    Behold the power of LabVIEW as my army of Roomba minions streaks across the floor!

  • How do i get the table_names and no of rows in a session

    How do I write a select statement to
    retrieve all the table names and the number of rows in each table in a session?
    For example, I should get output like the one below:
    tab_name no.of rows
    tab1 40
    tab2 50
    tab3 25
    Thank you for the help.

    Why? You do realize that this will force Oracle to do a full table-scan on every table in the schema, right? This will be horribly slow...
    The PL/SQL approach would be something like this... You could also write a pipelined function to do this if you want to be able to do this in a SQL statement.
    DECLARE
      sqlStmt VARCHAR2(4000);
      cnt     NUMBER;
    BEGIN
      FOR x IN (SELECT * FROM user_tables)
      LOOP
        -- no trailing semicolon inside the dynamic SQL string (it would raise ORA-00911)
        sqlStmt := 'SELECT COUNT(*) FROM ' || x.table_name;
        EXECUTE IMMEDIATE sqlStmt INTO cnt;
        dbms_output.put_line( x.table_name || ': ' || to_char(cnt) );
      END LOOP;
    END;
    Justin
    Distributed Database Consulting, Inc.
    http://www.ddbcinc.com/askDDBC
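    As an aside, if approximate figures are acceptable and the optimizer statistics are reasonably fresh, the data dictionary already holds the counts and no full scans are needed:
    SELECT table_name, num_rows, last_analyzed
    FROM   user_tables
    ORDER  BY table_name;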

  • Finding and averaging multiple rows

    Hello,
    I am having a little problem and was wondering if anyone had any ideas on how to best solve it. 
    Here is the problem: 
    - I have a large file 6000 rows by 2500 columns. 
    - First I sort the file by columns 1 and 2
    - then I find that various rows in these two columns (1 and 2) have duplicate values, sometimes only twice, but sometimes three or four, or five or up to 9 times.
    - this duplication occurs in only the first two columns, but we don't know in which rows and we don't know how much duplication there is. The remaining columns, i.e. column 3 to column 2500, for the corresponding rows contain data.
    - Programmatically, I would like to find the duplicated rows by searching columns 1 and 2 and, when I find them, average the respective data for these rows in columns 3 to 2500.
    - So, once this is done I want to save the averaged data to file. In each row this file should have the values of columns 1 and 2 and the averaged row values for columns 3 to 2500. So the file will have n rows by 2500 columns, where n will depend on how many duplicated rows there are in the original file.
    I hope that this makes sense. I have outlined the problem in a simple example below:
    In the example below we have two duplicates in rows 1 and 2 and four duplicates in rows 5 to 8.
    Example input file: 
    Col1 Col2 Col3 ... Col2500
    3 4 0.2 ... 0.5
    3 4 0.4 ... 0.8
    8 5 0.1 ... 0.4
    7 9 0.7 ... 0.9
    2 8 0.1 ... 0.5 
    2 8 0.5 ... 0.8
    2 8 0.3 ... 0.2
    2 8  0.6 ... 0.7
    6 9 0.9 ... 0.1
    So, based on the above example, the first two rows need averaging (two duplicates) as do rows 5 to 8 (four duplicates). The output file should look like this:
    Col1 Col2 Col3 ... Col2500
    3 4 0.3 ... 0.65
    8 5 0.1 ... 0.4
    7 9 0.7 ... 0.9
    2 8 0.375 ... 0.55
    6 9 0.9 ... 0.1

    Well, here's an initial crack at it. The premise behind this
    solution is to not even bother with the sorting. Also, trying to read
    the whole file at once just leads to memory problems. The approach
    taken is to read the file in chunks (as lines) and then for each line
    create a lookup key to see if that particular line has the same first
    and second columns as one that we've previously encountered. A shift
    register is used to keep track of the unique "keys".
    This
    is only an initial attempt and has known issues. Since a Build Array is
    used to create the resulting output array the loop will slow down over
    time, though it may slow down, speed up, and slow down as LabVIEW
    performs internal memory management to allocate more memory for the
    resultant array. On the large 6000 x 2500 array it took several minutes on my computer. I did this on LabVIEW 8.2, and I know that LabVIEW 8.6
    has better memory management, so the performance will likely be
    different. 
    Attachments:
    Averaging rows.vi (30 KB)
