Sum in sql

I have two tables in Oracle 10g:
LOCAL
id    code    name
1     10      local1
2     20      local2
3     30      local3
MADE
id    local    created
1     1        12.11.2010
2     2        16.11.2010
3     3        12.11.2010
4     6        08.10.2010
select s.name,s.code,
TO_CHAR(p.created,'MM') month,
count(p.id)
from local s,made p
where p.local = s.id
GROUP BY s.name,s.code,TO_CHAR(p.created,'MM')
ORDER BY s.code
After I run this query I get, for each local, how many were made in each month.
How do I also get, for each local, the total across all months? Something like:
name      code    month    count
local1    10      1        57
                  2        63
                  3        23
sum                        XXX
local2    20      1        34
                  2        72
                  3        68
sum                        XXX
and so on.
thanks

-- SQL*Plus report formatting: BREAK suppresses the repeated name values and
-- COMPUTE prints a SUM of the amount column after each group of rows with the
-- same name (the ORDER BY keeps each name's rows together).
BREAK ON name
COMPUTE SUM OF amount ON name
select s.name,s.code,
TO_CHAR(p.created,'MM') month,
count(p.id) amount
from local s,made p
where p.local = s.id
GROUP BY s.name,s.code,TO_CHAR(p.created,'MM')
ORDER BY s.code;
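
An alternative is to produce the subtotal rows in the SQL itself with GROUP BY ... ROLLUP, so no SQL*Plus formatting is needed (a sketch, assuming the same LOCAL and MADE tables as above; the subtotal row for each name/code appears with a NULL month):

select s.name, s.code,
       TO_CHAR(p.created,'MM') month,
       count(p.id) amount
from local s, made p
where p.local = s.id
GROUP BY s.name, s.code, ROLLUP(TO_CHAR(p.created,'MM'))
ORDER BY s.code, TO_CHAR(p.created,'MM');

GROUPING(TO_CHAR(p.created,'MM')) = 1 can be used to label the subtotal row as 'sum' if needed.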

Similar Messages

  • Question about SUM in SQL query

    I have a SQL statement, listed below along with its output.
    For the NUM_REQ column I am expecting to get the number 832, because the pm_requirements_table contains 16 records and they all have a frequency of WEEKLY:
    16 * 52 = 832. The query returns 12480; I can see that it is multiplying the 832 by 15, which is the number of entries in the lpm table (16 * 52 * 15 = 12480).
    I need the lpm table in there for other reasons, so my question is: how can I get it to return 832 and still have the lpm table as part of the query?
    Thanks,
    George
    SQL
    SELECT 'NAS WIDE' as target
    , 1 as years
    , count(distinct lpm.fac_ident) as facilities
    , SUM(CASE  upper(req.frequency)
      WHEN 'DAILY' THEN 365
      WHEN 'WEEKLY' THEN 52
      WHEN 'MONTHLY' THEN 12
      WHEN 'QUARTERLY' THEN 4
      WHEN 'SEMIANNUALLY' THEN 2
      WHEN 'ANNUALLY' THEN 1
      ELSE 0
    END) as num_req
    FROM lpm, pm_requirements_table req
    group by 'NAS WIDE';
    OUTPUT
    "TARGET","YEARS","FACILITIES","NUM_REQ"
    "NAS WIDE",1,1,12480
    -- PM_REQUIREMENTS_TABLE
    "PUBLICATION_ORDER","PUBLICATION_PARAGRAPH_NUMBER","DESCRIPTION","FREQUENCY","CHECK_OR_MAINTENANCE","PRTANTAC_ID"
    "6310.19A","161A","Check transmitter average rf power output","WEEKLY","",2
    "6310.19A","161B","Check transmitter VSWR","WEEKLY","",3
    "6310.19A","161C","Check RMS transmitter pulse width","WEEKLY","",4
    "6310.19A","161D(1)","Check filament current","WEEKLY","",5
    "6310.19A","161D(2)","Check focus coil current","WEEKLY","",6
    "6310.19A","161D(3)","Check Klystron voltage","WEEKLY","",7
    "6310.19A","161D(4)","Check Klystron current","WEEKLY","",8
    "6310.19A","161D(5)","Check PFN voltage","WEEKLY","",9
    "6310.19A","161D(6)","Check vacuum pump current","WEEKLY","",10
    "6310.19A","161E","Check target receiver MDS","WEEKLY","",11
    "6310.19A","161F","Check target receiver NF","WEEKLY","",12
    "6310.19A","161G","Check target receiver recovery","WEEKLY","",13
    "6310.19A","161H","Check weather receiver MDS","WEEKLY","",14
    "6310.19A","161I","Check weather receiver NF","WEEKLY","",15
    "6310.19A","161J","Check weather receiver recovery","WEEKLY","",16
    "6310.19A","161K","Check spare modem operation","WEEKLY","",17
    -- LPM table
    "LOG_ID","FAC_IDENT","FAC_TYPE","CODE_CATEGORY","SUPPLEMENTAL_CODE","MAINT_ACTION_CODE","INTERRUPT_CONDITION","ATOW_CODE","SECTOR_CODE","LOG_STATUS","START_DATE","START_DATETIME","END_DATE","END_DATETIME","MODIFIED_DATETIME","WR_AREA","SHORT_NAME","EQUIPMENT_IDENT","INTERVAL_CODE","EARLIEST_DATE","EARLIEST_DATETIME","SCHEDULED_DATE","SCHEDULED_DATETIME","LATEST_DATE","LATEST_DATETIME","WR_CREW_UNIT","WR_WATCH","PUBLICATION_ORDER","PUBLICATION_ORDER_ORIGINAL","PUBLICATION_PARAGRAPH","PUBLICATION_PARAGRAPH_ORIGINAL","NUMBER_OF_TASKS","LOG_SUMMARY","COMMENTS","RELATED_LOGS","LPMANTAC_ID"
    108305902,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",07-MAY-10,"05/07/2010 3:24",07-MAY-10,"05/07/2010 3:28","05/07/2010 3:31","RADAR","SYS","SYSTEM","W","05/02/2010","05/02/2010 0:00",05-MAY-10,"05/05/2010 0:00",08-MAY-10,"05/08/2010 0:00","RADR","","6310.19A","6310.19A","161.K","161. K","1","","","",1
    108306002,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",02-MAY-10,"05/02/2010 21:00",02-MAY-10,"05/02/2010 21:30","05/03/2010 1:07","RADAR","SYS","CHAN B","W","05/02/2010","05/02/2010 0:00",05-MAY-10,"05/05/2010 0:00",08-MAY-10,"05/08/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",2
    108306102,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",02-MAY-10,"05/02/2010 21:00",02-MAY-10,"05/02/2010 21:30","05/03/2010 1:07","RADAR","SYS","CHAN A","W","05/02/2010","05/02/2010 0:00",05-MAY-10,"05/05/2010 0:00",08-MAY-10,"05/08/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",3
    104188702,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",29-APR-10,"4/29/2010 10:09",29-APR-10,"4/29/2010 10:11","4/29/2010 10:30","RADAR","SYS","SYSTEM","W","4/25/2010","4/25/2010 0:00",28-APR-10,"4/28/2010 0:00",01-MAY-10,"05/01/2010 0:00","RADR","","6310.19A","6310.19A","161.K","161. K","1","","","",4
    104188402,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",26-APR-10,"4/26/2010 13:33",26-APR-10,"4/26/2010 13:46","4/26/2010 15:23","RADAR","SYS","CHAN A","W","4/25/2010","4/25/2010 0:00",28-APR-10,"4/28/2010 0:00",01-MAY-10,"05/01/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",5
    104188502,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",26-APR-10,"4/26/2010 13:33",26-APR-10,"4/26/2010 13:46","4/26/2010 15:23","RADAR","SYS","CHAN B","W","4/25/2010","4/25/2010 0:00",28-APR-10,"4/28/2010 0:00",01-MAY-10,"05/01/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",6
    101223702,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",19-APR-10,"4/19/2010 1:30",19-APR-10,"4/19/2010 2:10","4/19/2010 3:12","RADAR","SYS","CHAN B","W","4/18/2010","4/18/2010 0:00",21-APR-10,"4/21/2010 0:00",24-APR-10,"4/24/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",7
    101223802,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",19-APR-10,"4/19/2010 1:30",19-APR-10,"4/19/2010 2:10","4/19/2010 3:12","RADAR","SYS","CHAN A","W","4/18/2010","4/18/2010 0:00",21-APR-10,"4/21/2010 0:00",24-APR-10,"4/24/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",8
    101223602,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",19-APR-10,"4/19/2010 1:00",19-APR-10,"4/19/2010 1:09","4/19/2010 3:12","RADAR","SYS","SYSTEM","W","4/18/2010","4/18/2010 0:00",21-APR-10,"4/21/2010 0:00",24-APR-10,"4/24/2010 0:00","RADR","","6310.19A","6310.19A","161.K","161. K","1","","","",9
    96642602,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",12-APR-10,"04/12/2010 10:25",12-APR-10,"04/12/2010 10:34","04/12/2010 17:49","RADAR","SYS","SYSTEM","W","04/11/2010","04/11/2010 0:00",14-APR-10,"4/14/2010 0:00",17-APR-10,"4/17/2010 0:00","RADR","","6310.19A","6310.19A","161.K","161. K","1","","","",10
    96642402,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",11-APR-10,"04/11/2010 11:10",11-APR-10,"04/11/2010 11:15","04/11/2010 12:51","RADAR","SYS","CHAN B","W","04/11/2010","04/11/2010 0:00",14-APR-10,"4/14/2010 0:00",17-APR-10,"4/17/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",11
    96642302,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",11-APR-10,"04/11/2010 11:05",11-APR-10,"04/11/2010 11:10","04/11/2010 12:51","RADAR","SYS","CHAN A","W","04/11/2010","04/11/2010 0:00",14-APR-10,"4/14/2010 0:00",17-APR-10,"4/17/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",12
    92805502,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",07-APR-10,"04/07/2010 18:10",07-APR-10,"04/07/2010 18:22","04/07/2010 19:04","RADAR","SYS","CHAN A","W","04/04/2010","04/04/2010 0:00",07-APR-10,"04/07/2010 0:00",10-APR-10,"04/10/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",13
    92805402,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",07-APR-10,"04/07/2010 17:53",07-APR-10,"04/07/2010 18:05","04/07/2010 19:04","RADAR","SYS","CHAN B","W","04/04/2010","04/04/2010 0:00",07-APR-10,"04/07/2010 0:00",10-APR-10,"04/10/2010 0:00","RADR","","6310.19A","6310.19A","161A-J","161 A-J","15","","","",14
    92805302,"ATL","ASR",50,"0","P","","WEQ1C","SO1LB","C",07-APR-10,"04/07/2010 9:55",07-APR-10,"04/07/2010 10:05","04/07/2010 10:29","RADAR","SYS","SYSTEM","W","04/04/2010","04/04/2010 0:00",07-APR-10,"04/07/2010 0:00",10-APR-10,"04/10/2010 0:00","RADR","","6310.19A","6310.19A","161.K","161. K","1","","","",15Edited by: George Heller on Jul 15, 2011 10:32 AM

    -- LPM
    CREATE TABLE "LPM"
        "LOG_ID"              NUMBER(22,0) NOT NULL ENABLE,
        "FAC_IDENT"           VARCHAR2(5),
        "FAC_TYPE"            VARCHAR2(5),
        "CODE_CATEGORY"       NUMBER(22,0) NOT NULL ENABLE,
        "SUPPLEMENTAL_CODE"   VARCHAR2(1),
        "MAINT_ACTION_CODE"   VARCHAR2(1),
        "INTERRUPT_CONDITION" VARCHAR2(22),
        "ATOW_CODE"           VARCHAR2(22),
        "SECTOR_CODE"         VARCHAR2(5),
        "LOG_STATUS"          VARCHAR2(3) NOT NULL ENABLE,
        "START_DATE" DATE,
        "START_DATETIME" VARCHAR2(22),
        "END_DATE" DATE,
        "END_DATETIME"      VARCHAR2(22),
        "MODIFIED_DATETIME" VARCHAR2(22),
        "WR_AREA"           VARCHAR2(6),
        "SHORT_NAME"        VARCHAR2(15),
        "EQUIPMENT_IDENT"   VARCHAR2(15),
        "INTERVAL_CODE"     VARCHAR2(255),
        "EARLIEST_DATE"     VARCHAR2(4000),
        "EARLIEST_DATETIME" VARCHAR2(255),
        "SCHEDULED_DATE" DATE,
        "SCHEDULED_DATETIME" VARCHAR2(22),
        "LATEST_DATE" DATE,
        "LATEST_DATETIME"                VARCHAR2(22),
        "WR_CREW_UNIT"                   VARCHAR2(10),
        "WR_WATCH"                       VARCHAR2(1),
        "PUBLICATION_ORDER"              VARCHAR2(30),
        "PUBLICATION_ORDER_ORIGINAL"     VARCHAR2(30),
        "PUBLICATION_PARAGRAPH"          VARCHAR2(30),
        "PUBLICATION_PARAGRAPH_ORIGINAL" VARCHAR2(30),
        "NUMBER_OF_TASKS"                VARCHAR2(25),
        "LOG_SUMMARY"                    VARCHAR2(255),
        "COMMENTS" CLOB,
        "RELATED_LOGS" CLOB,
        "LPMANTAC_ID" NUMBER,
        PRIMARY KEY ("LPMANTAC_ID") ENABLE
      CREATE UNIQUE INDEX "SYS_IL0000077142C00035$$" ON "LPM"
    -- LPM_PARAGRAPH_MAPPING
    CREATE TABLE "LPM_PARAGRAPH_MAPPING_TABLE"
        "PUBLICATION_ORDER"       VARCHAR2(30),
        "PUBLICATION_PARAGRAPH"   VARCHAR2(30),
        "PARAGRAPH_ALIAS_MAPPING" VARCHAR2(30),
        "LPMTANTAC_ID"            NUMBER,
        PRIMARY KEY ("LPMTANTAC_ID") ENABLE
      CREATE UNIQUE INDEX "SYS_C0011587" ON "LPM_PARAGRAPH_MAPPING_TABLE"
        "LPMTANTAC_ID"
      -- PM_REQUIREMENTS_TABLE
    CREATE TABLE "PM_REQUIREMENTS_TABLE"
        "PUBLICATION_ORDER"            VARCHAR2(30),
        "PUBLICATION_PARAGRAPH_NUMBER" VARCHAR2(30),
        "DESCRIPTION"                  VARCHAR2(4000),
        "FREQUENCY"                    VARCHAR2(30),
        "CHECK_OR_MAINTENANCE"         VARCHAR2(22),
        "PRTANTAC_ID"                  NUMBER,
        PRIMARY KEY ("PRTANTAC_ID") ENABLE
      CREATE UNIQUE INDEX "SYS_C0011588" ON "PM_REQUIREMENTS_TABLE"
        "PRTANTAC_ID"
    REM INSERTING into LPM
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (108305902,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('07-MAY-10','DD-MON-RR'),'05/07/2010 3:24',to_date('07-MAY-10','DD-MON-RR'),'05/07/2010 3:28','05/07/2010 3:31','RADAR','SYS','SYSTEM','W','05/02/2010','05/02/2010 0:00',to_date('05-MAY-10','DD-MON-RR'),'05/05/2010 0:00',to_date('08-MAY-10','DD-MON-RR'),'05/08/2010 0:00','RADR',null,'6310.19A','6310.19A','161.K','161. K','1',null,1);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (108306002,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('02-MAY-10','DD-MON-RR'),'05/02/2010 21:00',to_date('02-MAY-10','DD-MON-RR'),'05/02/2010 21:30','05/03/2010 1:07','RADAR','SYS','CHAN B','W','05/02/2010','05/02/2010 0:00',to_date('05-MAY-10','DD-MON-RR'),'05/05/2010 0:00',to_date('08-MAY-10','DD-MON-RR'),'05/08/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,2);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (108306102,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('02-MAY-10','DD-MON-RR'),'05/02/2010 21:00',to_date('02-MAY-10','DD-MON-RR'),'05/02/2010 21:30','05/03/2010 1:07','RADAR','SYS','CHAN A','W','05/02/2010','05/02/2010 0:00',to_date('05-MAY-10','DD-MON-RR'),'05/05/2010 0:00',to_date('08-MAY-10','DD-MON-RR'),'05/08/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,3);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (104188702,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('29-APR-10','DD-MON-RR'),'4/29/2010 10:09',to_date('29-APR-10','DD-MON-RR'),'4/29/2010 10:11','4/29/2010 10:30','RADAR','SYS','SYSTEM','W','4/25/2010','4/25/2010 0:00',to_date('28-APR-10','DD-MON-RR'),'4/28/2010 0:00',to_date('01-MAY-10','DD-MON-RR'),'05/01/2010 0:00','RADR',null,'6310.19A','6310.19A','161.K','161. K','1',null,4);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (104188402,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('26-APR-10','DD-MON-RR'),'4/26/2010 13:33',to_date('26-APR-10','DD-MON-RR'),'4/26/2010 13:46','4/26/2010 15:23','RADAR','SYS','CHAN A','W','4/25/2010','4/25/2010 0:00',to_date('28-APR-10','DD-MON-RR'),'4/28/2010 0:00',to_date('01-MAY-10','DD-MON-RR'),'05/01/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,5);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (104188502,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('26-APR-10','DD-MON-RR'),'4/26/2010 13:33',to_date('26-APR-10','DD-MON-RR'),'4/26/2010 13:46','4/26/2010 15:23','RADAR','SYS','CHAN B','W','4/25/2010','4/25/2010 0:00',to_date('28-APR-10','DD-MON-RR'),'4/28/2010 0:00',to_date('01-MAY-10','DD-MON-RR'),'05/01/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,6);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (101223702,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('19-APR-10','DD-MON-RR'),'4/19/2010 1:30',to_date('19-APR-10','DD-MON-RR'),'4/19/2010 2:10','4/19/2010 3:12','RADAR','SYS','CHAN B','W','4/18/2010','4/18/2010 0:00',to_date('21-APR-10','DD-MON-RR'),'4/21/2010 0:00',to_date('24-APR-10','DD-MON-RR'),'4/24/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,7);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (101223802,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('19-APR-10','DD-MON-RR'),'4/19/2010 1:30',to_date('19-APR-10','DD-MON-RR'),'4/19/2010 2:10','4/19/2010 3:12','RADAR','SYS','CHAN A','W','4/18/2010','4/18/2010 0:00',to_date('21-APR-10','DD-MON-RR'),'4/21/2010 0:00',to_date('24-APR-10','DD-MON-RR'),'4/24/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,8);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (101223602,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('19-APR-10','DD-MON-RR'),'4/19/2010 1:00',to_date('19-APR-10','DD-MON-RR'),'4/19/2010 1:09','4/19/2010 3:12','RADAR','SYS','SYSTEM','W','4/18/2010','4/18/2010 0:00',to_date('21-APR-10','DD-MON-RR'),'4/21/2010 0:00',to_date('24-APR-10','DD-MON-RR'),'4/24/2010 0:00','RADR',null,'6310.19A','6310.19A','161.K','161. K','1',null,9);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (96642602,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('12-APR-10','DD-MON-RR'),'04/12/2010 10:25',to_date('12-APR-10','DD-MON-RR'),'04/12/2010 10:34','04/12/2010 17:49','RADAR','SYS','SYSTEM','W','04/11/2010','04/11/2010 0:00',to_date('14-APR-10','DD-MON-RR'),'4/14/2010 0:00',to_date('17-APR-10','DD-MON-RR'),'4/17/2010 0:00','RADR',null,'6310.19A','6310.19A','161.K','161. K','1',null,10);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (96642402,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('11-APR-10','DD-MON-RR'),'04/11/2010 11:10',to_date('11-APR-10','DD-MON-RR'),'04/11/2010 11:15','04/11/2010 12:51','RADAR','SYS','CHAN B','W','04/11/2010','04/11/2010 0:00',to_date('14-APR-10','DD-MON-RR'),'4/14/2010 0:00',to_date('17-APR-10','DD-MON-RR'),'4/17/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,11);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (96642302,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('11-APR-10','DD-MON-RR'),'04/11/2010 11:05',to_date('11-APR-10','DD-MON-RR'),'04/11/2010 11:10','04/11/2010 12:51','RADAR','SYS','CHAN A','W','04/11/2010','04/11/2010 0:00',to_date('14-APR-10','DD-MON-RR'),'4/14/2010 0:00',to_date('17-APR-10','DD-MON-RR'),'4/17/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,12);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (92805502,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 18:10',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 18:22','04/07/2010 19:04','RADAR','SYS','CHAN A','W','04/04/2010','04/04/2010 0:00',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 0:00',to_date('10-APR-10','DD-MON-RR'),'04/10/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,13);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (92805402,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 17:53',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 18:05','04/07/2010 19:04','RADAR','SYS','CHAN B','W','04/04/2010','04/04/2010 0:00',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 0:00',to_date('10-APR-10','DD-MON-RR'),'04/10/2010 0:00','RADR',null,'6310.19A','6310.19A','161A-J','161 A-J','15',null,14);
    Insert into LPM (LOG_ID,FAC_IDENT,FAC_TYPE,CODE_CATEGORY,SUPPLEMENTAL_CODE,MAINT_ACTION_CODE,INTERRUPT_CONDITION,ATOW_CODE,SECTOR_CODE,LOG_STATUS,START_DATE,START_DATETIME,END_DATE,END_DATETIME,MODIFIED_DATETIME,WR_AREA,SHORT_NAME,EQUIPMENT_IDENT,INTERVAL_CODE,EARLIEST_DATE,EARLIEST_DATETIME,SCHEDULED_DATE,SCHEDULED_DATETIME,LATEST_DATE,LATEST_DATETIME,WR_CREW_UNIT,WR_WATCH,PUBLICATION_ORDER,PUBLICATION_ORDER_ORIGINAL,PUBLICATION_PARAGRAPH,PUBLICATION_PARAGRAPH_ORIGINAL,NUMBER_OF_TASKS,LOG_SUMMARY,LPMANTAC_ID) values (92805302,'ATL','ASR',50,'0','P',null,'WEQ1C','SO1LB','C',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 9:55',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 10:05','04/07/2010 10:29','RADAR','SYS','SYSTEM','W','04/04/2010','04/04/2010 0:00',to_date('07-APR-10','DD-MON-RR'),'04/07/2010 0:00',to_date('10-APR-10','DD-MON-RR'),'04/10/2010 0:00','RADR',null,'6310.19A','6310.19A','161.K','161. K','1',null,15);
    REM INSERTING into PM_REQUIREMENTS_TABLE
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161A','Check transmitter average rf power output','WEEKLY',null,2);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161B','Check transmitter VSWR','WEEKLY',null,3);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161C','Check RMS transmitter pulse width','WEEKLY',null,4);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161D(1)','Check filament current','WEEKLY',null,5);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161D(2)','Check focus coil current','WEEKLY',null,6);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161D(3)','Check Klystron voltage','WEEKLY',null,7);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161D(4)','Check Klystron current','WEEKLY',null,8);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161D(5)','Check PFN voltage','WEEKLY',null,9);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161D(6)','Check vacuum pump current','WEEKLY',null,10);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161E','Check target receiver MDS','WEEKLY',null,11);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161F','Check target receiver NF','WEEKLY',null,12);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161G','Check target receiver recovery','WEEKLY',null,13);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161H','Check weather receiver MDS','WEEKLY',null,14);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161I','Check weather receiver NF','WEEKLY',null,15);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161J','Check weather receiver recovery','WEEKLY',null,16);
    Insert into PM_REQUIREMENTS_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH_NUMBER,DESCRIPTION,FREQUENCY,CHECK_OR_MAINTENANCE,PRTANTAC_ID) values ('6310.19A','161K','Check spare modem operation','WEEKLY',null,17);
    REM INSERTING into LPM_PARAGRAPH_MAPPING_TABLE
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161A','161',26);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161A','161A-J',27);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161B','161',28);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161B','161A-J',29);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161C','161',30);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161C','161A-J',31);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161C','161(A-->K)',32);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161C','161(A-K)',33);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161C','161.(A-C).',34);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(1)','161',35);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(1)','161161 A-J',36);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(1)','161(A-->K)',37);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(1)','161(A-D)',38);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(2)','161',39);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(2)','161 A-J',40);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161D(2)','161(A-->K)',41);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161E','161E/H',42);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161F','161E/H',43);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161G','161E/H',44);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161H','161E/H',45);
    Insert into LPM_PARAGRAPH_MAPPING_TABLE (PUBLICATION_ORDER,PUBLICATION_PARAGRAPH,PARAGRAPH_ALIAS_MAPPING,LPMTANTAC_ID) values ('6310.19A','161K','161.K',46);
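    One way to keep the lpm table in the query and still get 832 is to aggregate the two tables independently and then combine the single-row results, for example with pre-aggregated inline views (a sketch against the tables posted above):
    SELECT 'NAS WIDE' AS target,
           1          AS years,
           l.facilities,
           r.num_req
    FROM  (SELECT COUNT(DISTINCT fac_ident) AS facilities
           FROM   lpm) l,
          (SELECT SUM(CASE UPPER(frequency)
                        WHEN 'DAILY'        THEN 365
                        WHEN 'WEEKLY'       THEN 52
                        WHEN 'MONTHLY'      THEN 12
                        WHEN 'QUARTERLY'    THEN 4
                        WHEN 'SEMIANNUALLY' THEN 2
                        WHEN 'ANNUALLY'     THEN 1
                        ELSE 0
                      END) AS num_req
           FROM   pm_requirements_table) r;
    Each inline view returns exactly one row, so the join no longer multiplies the requirement count by the number of lpm rows.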

  • Accumulated SUM in SQL

    I am trying to generate a report from the SALGRADE table using a SQL script only. The report layout should have one base column and one accumulated sum column:
    Sales    Accumulated Sum
      700        700
     1201       1901
     1201       3102
     1401       4503
     1401       5904
     2001       7905
     3001      10906
    The data comes from the SALGRADE table in SCOTT/TIGER:
    GRADE    LOSAL
        1      700
        2     1201
        3     1401
        4     2001
        5     3001
        6     1401
        7     1201
    7 rows selected.
    But the following script produces the wrong result in rows 3 and 5:
    SQL> select b.grade,sum(c.losal) from salgrade b,salgrade c
    2 where b.grade >= c.grade
    3 group by b.grade
    4 /
    GRADE    SUM(C.LOSAL)
        1             700
        2            1901
        3            3302
        4            5303
        5            8304
        6            9705
        7           10906
    7 rows selected.
    Is there a way to re-write the SQL to get the correct result?
    Thanks
    David Wu

    David,
    There seems to be a difference between the report you want to produce and the select you created.
    The report sorts the data by column LOSAL, while your select sorts (groups) it by GRADE. If you re-write your select to use LOSAL as the join and group column, the result should look like your example.
    Regards,
    the Oracle Reports team
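    On a release that supports analytic functions, the running total can also be written without the self-join, using SUM ... OVER (a sketch against SCOTT.SALGRADE; the ROWS clause keeps the two 1201 rows distinct instead of giving both the same cumulative value):
    SELECT losal AS sales,
           SUM(losal) OVER (ORDER BY losal
                            ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS accumulated_sum
    FROM   salgrade
    ORDER BY losal;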

  • How to call pl/sql function from element values

    EBS 11.5.10.2
    XMLP 5.6.3
    Hello,
    I noticed that the output of the RDF-to-data-template conversion process makes use of an undocumented feature of data templates, and I would like to get input from experts as to the situations in which this feature is usable.
    The closest thing I can find in the documentation is a sample in the user guide. There is a "General Ledger Journals Data Template Example" that has a <dataStructure> section that contains <element> nodes which are NOT children of a <group> node. I can't find any explanation of this in the user guide.
    I've noticed from converted templates that in these un-grouped elements you can make calls to PL/SQL functions in the "value" attribute, like this:
    <dataStructure>
      <group name="G_LINES" source="Q_MAIN">
        <element name="Line_Num"           value="Line_Num"/>
      </group>
      <element name="C_CALCULATED_VALUE" dataType="number" value="XX_CUSTOMPROCS.SOME_FUNCTION"/>
    </dataStructure>
    Has anyone had any success being able to call PL/SQL functions from grouped elements? Whenever I try, it doesn't seem to work.
    When I try something like this:
    <dataStructure>
      <group name="G_LINES" source="Q_MAIN">
        <element name="Line_Num"           value="Line_Num"/>
        <element name="some_calculation"   value="XX_CUSTOMPROCS.SOME_FUNCTION"/>
        <element name="some_calculation_b" value="XX_CUSTOMPROCS.SOME_FUNCTION_B(:Line_Num)"/>
      </group>
      <element name="C_CALCULATED_VALUE" dataType="number" value="XX_CUSTOMPROCS.SOME_FUNCTION"/>
    </dataStructure>
    The <SOME_CALCULATION/> and <SOME_CALCULATION_B/> nodes come out empty in the output data xml file, but <C_CALCULATED_VALUE> would have a value as desired.

    Ah, perfect. That makes sense. Thank you for the response!
    But what about when we need to pass parameters to those functions whose values are the results of aggregate element values?
    This happens a lot in the converted data templates, where PL/SQL package functions are meant to replace formula columns from the original Oracle Report. Take this example from the conversion of ARXAGMW.rdf (Aging Report, 7 Buckets).
    (Note that the function call in the value of "Set_Percent_Inv_Inv" uses aggregate results from subgroups.)
      <group name="G_INV_INV" dataType="varchar2" source="Q_Invoice">
        <element name="Total_Inv_Inv_Amt" function="sum" dataType="number" value="G_Invoice.C_Amt_Due_Rem_Inv"/>
        <element name="Total_Inv_Inv_B0" function="sum" dataType="number" value="G_Invoice.C_Inv_B0"/>
        <element name="Total_Inv_Inv_B1" function="sum" dataType="number" value="G_Invoice.C_Inv_B1"/>
        <element name="Total_Inv_Inv_B2" function="sum" dataType="number" value="G_Invoice.C_Inv_B2"/>
        <element name="Total_Inv_Inv_B3" function="sum" dataType="number" value="G_Invoice.C_Inv_B3"/>
        <element name="Total_Inv_Inv_B4" function="sum" dataType="number" value="G_Invoice.C_Inv_B4"/>
        <element name="Total_Inv_Inv_B5" function="sum" dataType="number" value="G_Invoice.C_Inv_B5"/>
        <element name="Total_Inv_Inv_B6" function="sum" dataType="number" value="G_Invoice.C_Inv_B6"/>
        <element name="Set_Percent_Inv_Inv"  dataType="number"  value="XX_CUSTOMPROCS.XXC_ARXAGMW.set_percent_inv_invformula(:Total_Inv_Inv_Amt, :Total_Inv_Inv_B0, :Total_Inv_Inv_B1, :Total_Inv_Inv_B2, :Total_Inv_Inv_B3, :Total_Inv_Inv_B4, :Total_Inv_Inv_B5, :Total_Inv_Inv_B6)"/>
        <element name="Sum_Percent_B0_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B0_Inv_p"/>
        <element name="Sum_Percent_B1_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B1_Inv_p"/>
        <element name="Sum_Percent_B2_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B2_Inv_p"/>
        <element name="Sum_Percent_B3_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B3_Inv_p"/>
        <element name="Sum_Percent_B4_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B4_Inv_p"/>
        <element name="Sum_Percent_B5_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B5_Inv_p"/>
        <element name="Sum_Percent_B6_Inv" dataType="number" value="XX_CUSTOMPROCS.XXC_ARXAGMW.Sum_Percent_B6_Inv_p"/>
        <group name="G_Cust_Inv" dataType="varchar2" source="Q_Invoice">
          <group name="G_Site_Inv" dataType="varchar2" source="Q_Invoice">
            <group name="G_1" dataType="varchar2" source="Q_Invoice">
              <group name="G_Invoice" dataType="varchar2" source="Q_Invoice">
                <element name="C_Amt_Due_Rem_Inv" dataType="number" value="C_Amt_Due_Rem_Inv"/>
                <element name="C_Inv_B0" dataType="number" value="C_Inv_B0"/>
                <element name="C_Inv_B1" dataType="number" value="C_Inv_B1"/>
                <element name="C_Inv_B2" dataType="number" value="C_Inv_B2"/>
                <element name="C_Inv_B3" dataType="number" value="C_Inv_B3"/>
                <element name="C_Inv_B4" dataType="number" value="C_Inv_B4"/>
                <element name="C_Inv_B5" dataType="number" value="C_Inv_B5"/>
                <element name="C_Inv_B6" dataType="number" value="C_Inv_B6"/>
              </group>
            </group>
          </group>
        </group>
      </group>
      ...
    All of these groups and sub-groups are based on one single query, so I am not sure how I would move the function call into the query without changing the results of the function.
    In the example above, elements Sum_Percent_B0_Inv through Sum_Percent_B6_Inv grab the results of the calculation done in set_percent_inv_invformula. Here is the essence of that function:
      sum_percent_b0_inv := ROUND ((total_inv_inv_b0 / total_inv_inv_amt) * 100, 2);
      sum_percent_b1_inv := ROUND ((total_inv_inv_b1 / total_inv_inv_amt) * 100, 2);
      sum_percent_b2_inv := ROUND ((total_inv_inv_b2 / total_inv_inv_amt) * 100, 2);
      sum_percent_b3_inv := ROUND ((total_inv_inv_b3 / total_inv_inv_amt) * 100, 2);
      sum_percent_b4_inv := ROUND ((total_inv_inv_b4 / total_inv_inv_amt) * 100, 2);
      sum_percent_b5_inv := ROUND ((total_inv_inv_b5 / total_inv_inv_amt) * 100, 2);
      sum_percent_b6_inv := ROUND ((total_inv_inv_b6 / total_inv_inv_amt) * 100, 2);
    The only solution I can think of is to have separate queries, one for each subgroup, that do the "sum" in sql; but that seems terribly inefficient.
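    If the totals are computed inside the single query instead, analytic functions avoid the separate per-subgroup queries. A sketch of the idea, where the column names come from the posted template and invoice_aging is a hypothetical stand-in for the Q_Invoice query:
    SELECT inv.c_amt_due_rem_inv,
           inv.c_inv_b0,
           -- report-level totals (the G_INV_INV level), repeated on every row
           SUM(inv.c_amt_due_rem_inv) OVER () AS total_inv_inv_amt,
           SUM(inv.c_inv_b0)          OVER () AS total_inv_inv_b0,
           -- the percentage that the PL/SQL formula function computes
           ROUND(SUM(inv.c_inv_b0) OVER ()
                 / NULLIF(SUM(inv.c_amt_due_rem_inv) OVER (), 0) * 100, 2) AS sum_percent_b0_inv
    FROM   invoice_aging inv;   -- hypothetical table standing in for the Q_Invoice query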

  • Dao and sql

    Dear All,
    In SQL we can compute the SUM of some data, or we can fetch the data with a select query and do the sum in the DAO classes (e.g. building it up with a StringBuffer).
    What I need to know is: if I write SUM in the query, will processing be slower, or is doing the sum in the DAO classes faster?
    Which is better for performance: summing in the DAO classes, or SUM in the SQL query?
    Thanks a lot in advance

    It is entirely dependent on your design. If you fetch the records from two different tables via JDBC with no cache, performance will generally be slower than doing the sum in the database engine, particularly over large intervals and depending on the complexity of the query. That alone does not mandate using SUM, though; it is a design decision.
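    For illustration, pushing the aggregation into the database returns a single row instead of every record (a minimal sketch with a hypothetical ORDERS table):
    -- one round trip, one row transferred; the database engine does the summing
    SELECT SUM(amount) AS total_amount
    FROM   orders
    WHERE  order_date >= DATE '2011-01-01';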

  • Sum by row

    Hello folks,
    I am looking for a way to get a sum column in this query:
    select *
      from (select 1 as ID, 10 as Value, '1.1.2001' as dat_e
              from dual
            union all
            select 2, 15, '2.2.2002'
              from dual
            union all
            select 3, 10, '3.3.2003'
              from dual
              union all
            select 4, 10, '4.4.2004'
              from dual)
    The result I am looking for is a column that adds values like this:
    10
    25
    35
    45
    Best regards,
    Igor

    Analytic version of the SUM function:
    SQL> with t as  (
      2    select 1 as ID, 10 as Value, '1.1.2001' as dat_e from dual union all
      3    select 2, 15, '2.2.2002'          from dual        union all
      4    select 3, 10, '3.3.2003'          from dual        union all
      5    select 4, 10, '4.4.2004'          from dual
      6  )
      7  select id
      8       , sum(value) over(order by id) as cumul_val
      9  from t ;
            ID  CUMUL_VAL
             1         10
             2         25
             3         35
             4         45

  • Add Time value to a Date value

    Hi
    Column A is a DATE type, e.g. 14-MAR-13 12:30.
    Column B is a CHAR type holding hours and minutes, e.g. 01:45.
    I'm trying to add (in SQL) the time in Col B to the value in Col A; the result should be 14:15.
    I'm not too sure how to proceed.
    Thanks
    Ruben

    Ruben_920841 wrote:
    Hi Frank,
    Thank you for your quick reply.
    However, I'm not sure I understand why you insert "TRUNC (b_date)" in your code below. Can you please explain?
    WITH  got_b_date  AS
    (
        SELECT  a, b
        ,       TO_DATE (b, 'HH24:MI')  AS b_date
        FROM    table_x
    )
    SELECT  a, b
    ,       a + ( b_date
                  - TRUNC (b_date)
                )       AS a_plus_b
    FROM    got_b_date
    ;
    TO_DATE (b, 'HH24:MI') converts a string (such as '01:45') into a DATE (such as 1:45 AM on April 1, 2013). Since we're not passing a day of the month, month or year, those all take default values.
    If d is a DATE, then TRUNC (d) is the earliest DATE (midnight) on the same calendar day.
    If d1 and d2 are DATEs, then d1 - d2 is the number of days (not necessarily an integer) that d1 is after d2.
    If d is a DATE and n is a NUMBER (not necessarily an integer), then d + n is a DATE n days after d.
    Put this all together and what do we have, given that b is '01:45'
    b_date is 1:45 AM on some default day (for this solution, it doesn't matter which day)
    TRUNC (b_date) is midnight on that same calendar day.
    b_date - TRUNC (b_date) is a NUMBER, the number of days that 1:45 is after midnight (1:45 is 1.75 hours, and a day is 24 hours, so that comes to 1.75 / 24 = .072916666... days.)
    a + (b_date - TRUNC (b_date)) is a DATE, .072916666... days (that is, 1.75 hours) after a.
    Notice that Solomon did almost the same thing, only Solomon used the fact that the default day of the month in TO_DATE is 1, and that the default month and year are the same month and year that SYSDATE returns, so b_date - TRUNC (b_date) is the same as b_date - TRUNC (SYSDATE, 'MONTH').
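    As a quick check of the arithmetic with literal values instead of table_x:
    SELECT TO_DATE ('14-MAR-13 12:30', 'DD-MON-RR HH24:MI')
           + ( TO_DATE ('01:45', 'HH24:MI')
               - TRUNC (TO_DATE ('01:45', 'HH24:MI'))
             ) AS a_plus_b
    FROM   dual;
    -- A_PLUS_B: 14-MAR-13 14:15 (assuming an English NLS date language for 'MAR')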

  • Need to calculate subtotal and print it on basis of dc,store and lid number

    Hi Expert
    I need to calculate a subtotal for a couple of fields on the basis of DC, store and lid number. I have calculated the subtotal at query level, but I am facing a problem when printing it in the report.
    Please find the example:
    For DC=1111, store=2222, lid no=3333
    QTY rcvd    qty ordered    qty available    REFNO
    20          300            680              7859
    45          258            348              4528
    For DC=1111, store=2222, lid no=3334
    QTY rcvd    qty ordered    qty available    REFNO
    58          78             69               1487
    45          54             74               2578
    I want the sum below each completed set of values, but since I am calculating the sum in the SQL query, I put it in a group and it appears at the end of every line, which is not what I want.
    Kindly let me know how to do the grouping so that I can get it at the end of each combination of DC, store and lid no.
    Thanks in advance.
    Regards
    Srikant

    Can you send me the XML and RTF template to [email protected]? I will take a look.
    Thanks,
    Bipuser
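    If the subtotal row should come from the query itself rather than the RTF template, GROUP BY ... ROLLUP can add one subtotal line per DC/store/lid combination (a sketch with hypothetical table and column names, since the actual query was not posted):
    SELECT dc, store, lid_no, refno,
           SUM(qty_rcvd)      AS qty_rcvd,
           SUM(qty_ordered)   AS qty_ordered,
           SUM(qty_available) AS qty_available
    FROM   receipt_lines                    -- hypothetical table name
    GROUP BY dc, store, lid_no, ROLLUP(refno)
    ORDER BY dc, store, lid_no, refno;
    The row where REFNO comes back NULL is the subtotal for that DC/store/lid combination, so the template only has to print it, not compute it.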

  • Column Not Displayed in Universe

    Hi,
    I can't see a column in the Universe. I'm connecting to SAP Tables using ODBC. My database is MaxDB. My platform is Windows 32-bit. I'm using BO Edge XI 3.1. Here is my original PRM file:
    <?xml version="1.0" encoding="UTF-8"?><!DOCTYPE DBParameters SYSTEM "../dbparameters.dtd"><DBParameters>
         <Configuration>
              <Parameter Name="DB_TYPE">GENERIC</Parameter>
              <Parameter Name="SORT_BY_NO">YES</Parameter>
              <Parameter Name="GROUPBYCOL">NO</Parameter>
              <Parameter Name="EXT_JOIN">NO </Parameter>
              <Parameter Name="CONCAT">+</Parameter>
              <Parameter Name="UNION"></Parameter>
              <Parameter Name="UNION_IN_SUBQUERY"></Parameter>
              <Parameter Name="INTERSECT"></Parameter>
              <Parameter Name="INTERSECT_IN_SUBQUERY"></Parameter>
              <Parameter Name="MINUS"></Parameter>
              <Parameter Name="MINUS_IN_SUBQUERY"></Parameter>
              <Parameter Name="OWNER">Y</Parameter>
              <Parameter Name="QUALIFIER">Y</Parameter>
              <Parameter Name="COMMA">' '</Parameter>
              <Parameter Name="NO_DISTINCT">Y</Parameter>
              <Parameter Name="REFRESH_COLUMNS_TYPE">T</Parameter>
              <Parameter Name="CHECK_OWNER_STATE">Y</Parameter>
              <Parameter Name="CHECK_QUALIFIER_STATE">Y</Parameter>
              <Parameter Name="KEY_INFO_SUPPORTED">N</Parameter>
              <Parameter Name="OUTERJOINS_GENERATION">NO</Parameter>
              <Parameter Name="EVAL_WITHOUT_PARENTHESIS">N</Parameter>
              <Parameter Name="USER_INPUT_DATE_FORMAT">{\d 'yyyy-mm-dd'}</Parameter>
            <Parameter Language="ja" Name="USER_INPUT_DATE_FORMAT">{!d 'yyyy-mm-dd'}</Parameter>
              <Parameter Name="USER_INPUT_NUMERIC_SEPARATOR">.</Parameter>
         </Configuration>
         <DateOperations>
              <DateOperation Name="YEAR">{fn year($D)}</DateOperation>
              <DateOperation Name="MONTH">{fn month($D)}</DateOperation>
         </DateOperations>
         <Operators>
              <Operator Arity="1" ID="ADD" Type="Numeric">+</Operator>
              <Operator Arity="1" ID="SUBSTRACT" Type="Numeric">-</Operator>
              <Operator Arity="1" ID="MULTIPLY" Type="Numeric">*</Operator>
              <Operator Arity="1" ID="DIVIDE" Type="Numeric">/</Operator>
              <Operator Arity="0" ID="NOT_NULL" Type="Logical">IS NOT NULL</Operator>
              <Operator Arity="0" ID="NULL" Type="Logical">IS NULL</Operator>
              <Operator Arity="1" ID="SUP" Type="Logical">&gt;=</Operator>
              <Operator Arity="1" ID="INF" Type="Logical">&lt;=</Operator>
              <Operator Arity="1" ID="EQUAL" Type="Logical">=</Operator>
              <Operator Arity="1" ID="DIFF" Type="Logical">&lt;&gt;</Operator>
              <Operator Arity="1" ID="STRICT_SUP" Type="Logical">&gt;</Operator>
              <Operator Arity="1" ID="STRICT_INF" Type="Logical">&lt;</Operator>
              <Operator Arity="1" ID="IN_LIST" Type="Logical">IN</Operator>
              <Operator Arity="1" ID="NOT_IN_LIST" Type="Logical">NOT IN</Operator>
              <Operator Arity="1" ID="MATCH" Type="Logical">LIKE</Operator>
              <Operator Arity="1" ID="NOT_MATCH" Type="Logical">NOT LIKE</Operator>
              <Operator Arity="2" ID="BETWEEN" Type="Logical">BETWEEN  AND</Operator>
              <Operator Arity="2" ID="NOT_BETWEEN" Type="Logical">NOT BETWEEN  AND</Operator>
         </Operators>
         <Functions>
              <Function Distinct="False" Group="True" ID="Minimum" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>min($1)</SQL>
              </Function>
              <Function Distinct="False" Group="True" ID="Maximum" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>max($1)</SQL>
              </Function>
              <Function Distinct="False" Group="True" ID="Average" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>avg($1)</SQL>
              </Function>
              <Function Distinct="False" Group="True" ID="Sum" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>sum($1)</SQL>
              </Function>
              <Function Distinct="False" Group="True" ID="Count" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="All"></Argument>
                   </Arguments>
                   <SQL>count($1)</SQL>
              </Function>
              <Function Group="False" ID="ASCII_code" InMacro="False" Type="String">
                   <Arguments>
                        <Argument Type="Char"></Argument>
                   </Arguments>
                   <SQL>{fn ascii($1)}</SQL>
              </Function>
              <Function Group="False" ID="Character" InMacro="False" Type="String">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn char($1)}</SQL>
              </Function>
              <Function Group="False" ID="Concat" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn concat($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="Left" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn left($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="LeftRemove" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn ltrim($1)}</SQL>
              </Function>
              <Function Group="False" ID="Length" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn length($1)}</SQL>
              </Function>
              <Function Group="False" ID="Locate" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn locate($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="Lowercase" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn lcase($1)}</SQL>
              </Function>
              <Function Group="False" ID="Repeat" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn repeat($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="Rightpart" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn right($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="Rtrim" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn rtrim($1)}</SQL>
              </Function>
              <Function Group="False" ID="Substring" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="Numeric"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn substring($1,$2,$3)}</SQL>
              </Function>
              <Function Group="False" ID="Uppercase" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>{fn ucase($1)}</SQL>
              </Function>
              <Function Group="False" ID="Absolute" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn abs($1)}</SQL>
              </Function>
              <Function Group="False" ID="Arc_cosine" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn acos($1)}</SQL>
              </Function>
              <Function Group="False" ID="Arc_sine" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn asin($1)}</SQL>
              </Function>
              <Function Group="False" ID="Arc_tangent" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn atan($1)}</SQL>
              </Function>
              <Function Group="False" ID="Angle_Tangent_2" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn atan2($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="Cosine" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn cos($1)}</SQL>
              </Function>
              <Function Group="False" ID="Ceil" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn ceiling($1)}</SQL>
              </Function>
              <Function Group="False" ID="Exp" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn exp($1)}</SQL>
              </Function>
              <Function Group="False" ID="Floor" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn floor($1)}</SQL>
              </Function>
              <Function Group="False" ID="Log" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn log($1)}</SQL>
              </Function>
              <Function Group="False" ID="Mod" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn mod($1,$2)}</SQL>
              </Function>
              <Function Group="False" ID="Pi" InMacro="False" Type="Numeric">
                   <SQL>{fn pi()}</SQL>
              </Function>
              <Function Group="False" ID="Random" InMacro="False" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn rand($1)}</SQL>
              </Function>
              <Function Group="False" ID="Sign" InMacro="False" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn sign($1)}</SQL>
              </Function>
              <Function Group="False" ID="Sine" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn sin($1)}</SQL>
              </Function>
              <Function Group="False" ID="Sqrt" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn sqrt($1)}</SQL>
              </Function>
              <Function Group="False" ID="Tangent" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>{fn tan($1)}</SQL>
              </Function>
              <Function Group="False" ID="Character_prompt" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>@prompt($1,'A',,,)</SQL>
              </Function>
              <Function Group="False" ID="Numeric_prompt" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>@prompt($1,'N',,,)</SQL>
              </Function>
              <Function Group="False" ID="Date_prompt" InMacro="True" Type="DateTime">
                   <Arguments>
                        <Argument Type="String"></Argument>
                   </Arguments>
                   <SQL>@prompt($1,'D',,,)</SQL>
              </Function>
         </Functions>
    </DBParameters>
    Assistance is much appreciated. Thank you.
    Rgds,
    Hapizorr

    Hi
    This is a rather unconventional and not officially supported way to access SAP data. Some of the table fields as you see them in SAP R3 do not have corresponding fields in the underlying database.
    If you want to build reports on SAP R3 tables, you have the following options:
    1) Use the SAP rapid marts and extract your R3 data into a DWH based on a relational database. You also get a universe on which you can build your reports.
    2) You can use Crystal Reports and the native R3 table driver
    3) There is also a prototype (this is NOT a product) R3 connector for Data Federator that will allow you to build relational universes on SAP R3 sources.
    4) Load your data into a BW system and build OLAP universes on top of BEx queries.
    Regards,
    Stratos

  • GROUP BY in Desktop Intelligence

    Question: Is there a way to use GROUP BY statement at the Universe level?
    Detailed Example: I have Cricket Teams and Team Members. I want to display all the team members for a team in a single cell as comma separated values. At database level, I can achieve this by using an oracle function (stringagg) as follows:
    SELECT Team, stringagg(Team_Members)
    FROM Cricket_Team
    GROUP BY Team
    In Business Objects Universe, I have created two objects:
    1> Team as Cricket_Team.Team
    2> Team Members as stringagg(Team_Members)
    When I drag these two objects in the report I get DA0005 error (Exception: DBD, ORA-00937: not a single-group group function
    State: N/A)
    This is because the SQL generated by Desktop Intelligence is
    SELECT Team, stringagg(Team_Members)
    FROM Cricket_Team
    I can fix this at reporting level by customizing the SQL, but I want this to be fixed at the Universe level so that all the users can get the Team_Members list in comma separated values for each Team.
    So, is there a way to get this GROUP BY Team statement at the Universe level?

    Hello,
    You cannot do this in the universe.
    As we are using BO 6.5, the solution below is based on that version, but because Deski hasn't changed much since then it might still work the same way (you need to check for yourself).
    What you can do is adjust the prm-file (parameter file) for Oracle (oracle.prm).
    You can find it in the folder (=our default installation folder for BO6.5): C:\Program Files\Business Objects\BusinessObjects Enterprise 6\dataAccess\RDBMS\connectionServer\oracle
    In that file the functions are listed, and the Group attribute determines whether a function generates a GROUP BY.
    <Function Group="False" ID="Substring" InMacro="True" Type="String">
                   <Arguments>
                        <Argument Type="String"></Argument>
                        <Argument Type="Numeric"></Argument>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>substr($1,$2,$3)</SQL>
    </Function>
    <Function Group="True" ID="Sum" InMacro="True" Type="Numeric">
                   <Arguments>
                        <Argument Type="Numeric"></Argument>
                   </Arguments>
                   <SQL>sum($1)</SQL>
    </Function>
    The two examples above are from the oracle.prm file, where 'Substring' will not generate a GROUP BY and 'Sum' will.
    What you could do is add the 'stringagg' function (after making a copy of the original oracle.prm file) and see if it works.
    Regards,
    Harry

  • Group by with outer join

    Group by sum doesn't work with outer join. Can anyone please help me to get it right?
    I've posted some sample data and queries below:
    CREATE TABLE COMPLAINT
    (
      CNO     NUMBER,
      REASON  VARCHAR2(15 BYTE),
      TOTAL   NUMBER
    );
    Insert into COMPLAINT
       (CNO, REASON, TOTAL)
    Values
       (1, 'edge', 250);
    Insert into COMPLAINT
       (CNO, REASON, TOTAL)
    Values
       (2, 'edge', 250);
    Insert into COMPLAINT
       (CNO, REASON, TOTAL)
    Values
       (3, 'brst', 300);
    Insert into COMPLAINT
       (CNO, REASON, TOTAL)
    Values
       (4, 'crea', 400);
    COMMIT;
    CREATE TABLE SCOTT.COMPLAINTROLL
    (
      CNO   NUMBER,
      ROLL  VARCHAR2(15 BYTE)
    );
    SET DEFINE OFF;
    Insert into COMPLAINTROLL
       (CNO, ROLL)
    Values
       (2, 'roll22');
    Insert into COMPLAINTROLL
       (CNO, ROLL)
    Values
       (1, 'roll4');
    Insert into COMPLAINTROLL
       (CNO, ROLL)
    Values
       (1, 'roll3');
    Insert into COMPLAINTROLL
       (CNO, ROLL)
    Values
       (1, 'roll2');
    Insert into COMPLAINTROLL
       (CNO, ROLL)
    Values
       (1, 'roll1');
    COMMIT;
    select * from complaint
    CNO     REASON     TOTAL
    1     edge     250
    2     edge     250
    3     brst     300
    4     crea     400
    select * from complaintroll
    CNO     ROLL
    1     roll1
    1     roll2
    1     roll3
    1     roll4
    2     roll22
    -- total of reason code edge is 500
    select reason,sum(total)
    from complaint c
    group by reason
    REASON     SUM(TOTAL)
    brst     300
    crea     400
    edge     500
    -- total of reason code edge after outer join is 1250
    select reason,sum(total)
    from complaint c,complaintroll cr
    where c.cno=cr.cno(+)
    group by reason
    REASON     SUM(TOTAL)
    brst     300
    crea     400
    edge     1250
    Thanks for reading this post.

    The problem that you described is simple. The outer join duplicates all the rows from the parent table (complaint). If you want to sum a column from the parent table, then this sum includes all the duplicated rows.
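    To make the duplication visible, here is a quick check against the sample tables above (just the join, before any grouping):
    select c.cno, c.reason, c.total, cr.roll
    from complaint c, complaintroll cr
    where c.cno = cr.cno(+)
    order by c.cno, cr.roll;
    -- cno 1 ('edge', total 250) comes back once per matching roll, i.e. four times,
    -- and cno 2 ('edge', total 250) once, so SUM(total) for 'edge' becomes 4*250 + 250 = 1250.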
    There are several solutions for this problem.
    A) One had been shown already by Cenutil. Instead of doing an outer join, you can do a subquery in the select clause that delivers the additional information from the detail table.
    SQL> select reason,
       sum(total),
       sum((select count(*) from complaintroll cr where c.cno=cr.cno)) cnt_rolls
    from complaint c
    group by c.reason;
    REASON          SUM(TOTAL)  CNT_ROLLS
    crea                   400          0
    brst                   300          0
    edge                   500          5
    B) Sum in two steps: first sum and count including the join criteria, then use this information to calculate the correct total sum.
    SQL> select reason, sum(stotal), sum(stotal/scount), sum(scount), sum(cnt_rolls)
      2  from (select reason, sum(total) stotal, count(*) scount, count(cr.cno) cnt_rolls
      3         from complaint c
      4         left join complaintroll cr on c.cno=cr.cno
      5         group by reason, c.cno
      6         )
      7   group by reason;
    REASON          SUM(STOTAL) SUM(STOTAL/SCOUNT) SUM(SCOUNT) SUM(CNT_ROLLS)
    crea                    400                400           1              0
    brst                    300                300           1              0
    edge                   1250                500           5              5
    C) Another option is to do the left join, but do the aggregation only once for the parent table. Analytic functions are helpful for that. However, since analytic functions can't be used inside an aggregate function, we would again need an inline view.
    SQL> select reason, sum(case when rn = 1 then total end) sum_total, count(*), count(crcno)
      2  from (select row_number() over (partition by c.reason order by c.cno) rn,
      3                   c.*, cr.cno crcno
      4         from complaint c
      5         left join complaintroll cr on c.cno=cr.cno
      6         )
      7  group by reason;
    REASON           SUM_TOTAL   COUNT(*) COUNT(CRCNO)
    brst                   300          1            0
    crea                   400          1            0
    edge                   250          5            5
    Edited by: Sven W. on Feb 10, 2011 1:00 PM - formatting + column added to 2nd option

  • ORACLE8 OPS TUNING

    Product : ORACLE SERVER
    Date written : 2004-08-13
    ORACLE8 OPS TUNING
    ====================
    PURPOSE
    This note explains database tuning in an OPS environment.
    SCOPE
    In Standard Edition, the Real Application Clusters feature is only supported from 10g (10.1.0) onwards.
    Explanation
    The single-instance tuning areas (buffer cache, shared pool, disk, and so on) remain important in OPS tuning, but you also need to understand the additional tuning factors that an OPS environment introduces.
    Proper adjustment of the tuning parameters can improve system performance, but it cannot fix problems that come from, for example, an inaccurate analysis of LM lock contention.
    The key to OPS tuning is to get maximum performance by minimizing contention for shared resources.
    1. Contention bottlenecks
    1) Data block : pinging, false pinging
    If multiple instances update the same data block at the same time, that block will be pinged between the instances. The more blocks a single PCM lock covers, the more false pinging will occur.
    <Note> Pinging : when one instance has modified a data block that another instance needs, the block must be written to disk before the other instance can read it; this operation is called pinging.
    False pinging : occurs when different instances request different blocks that happen to be covered by the same PCM lock. Because this kind of ping can be avoided by reducing the number of blocks each PCM lock covers, it is considered an unnecessary ping.
    2) Rollback segment : read consistency
    If a DML transaction on one instance has generated rollback information and another instance needs that rollback information for read consistency, there will be contention on the rollback segment blocks.
    3) Segment header : freelist contention
    When multiple transactions insert into the same object (table, index, cluster) at the same time, there will be contention on the segment header, that is, on the freelist, and the instances will ping the segment header.
    4) Device contention
    Contention occurs when multiple instances write to the same disk at the same time.
    2. Diagnosing lock conversions
    Lock contention problems can be diagnosed by querying V$LOCK_ACTIVITY and V$SYSSTAT for information about lock conversions (locks being upgraded or downgraded).
    To judge how often lock converts take place, calculate how many lock converts are needed by the transactions that read or modify data blocks.
    This can be quantified as a lock hit ratio: the proportion of all data block accesses that did not require a lock convert.
    Lock hit ratio = (consistent_gets - global_lock_converts(async)) / consistent_gets
    This value can be obtained with the following SQL statement:
    SELECT (b1.value - b2.value) / b1.value ops_ratio
    FROM V$SYSSTAT b1, V$SYSSTAT b2
    WHERE b1.name = 'consistent gets'
    AND b2.name = 'global lock converts (async)';
    If this value is below 95%, you are not fully exploiting the performance gain that adding nodes could provide.
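    As a worked example with made-up numbers (only to illustrate the formula): if 'consistent gets' is 1,000,000 and 'global lock converts (async)' is 80,000, the lock hit ratio is (1,000,000 - 80,000) / 1,000,000 = 0.92, i.e. 92%. That is below the 95% guideline, so too many block accesses are paying for a lock convert.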
    3. Monitoring and tuning the parallel server through views
    1) Analyzing V$LOCK_ACTIVITY
    Monitor and tune the distributed locks with the following procedure.
    (a) Run the following query repeatedly against each instance:
    SELECT * FROM V$LOCK_ACTIVITY;
    (b) If lock conversions increase sharply on any instance, run the following SQL statement to find which types of lock conversion occur most often:
    SELECT * FROM V$LOCK_ACTIVITY;
    If the most frequent conversions are from X down to a lower mode (for example X -> S, X -> Null, X -> SSX, S -> N), this indicates contention between instances for buffer cache blocks (blocks being pinged): the instance is releasing locks at the request of other instances. Monitor whether the conversion counts on that instance keep rising sharply.
    (c) Query V$LOCK_ACTIVITY on each instance to find which instance performs the most NULL -> S, S -> S and S -> X conversions. The instance with the most such conversions is the one that most often requests (pings) data already locked by other instances.
    If the pings occur mostly between two instances, consider moving the applications that run on those two nodes onto a single node (see (g)). If the pings are spread evenly across the nodes, the PCM lock allocation and the application need to be tuned (see (h)).
    (d) Query V$PING to find which blocks are pinged most:
    SELECT * FROM V$PING;
    To list only the most heavily pinged entries, add conditions such as:
    SELECT * FROM V$PING
    WHERE FORCED_READS > 10 OR FORCED_WRITES > 10;
    SELECT NAME, KIND, STATUS, SUM(FORCED_READS),
    SUM(FORCED_WRITES)
    FROM V$PING
    GROUP BY NAME, KIND, STATUS ORDER BY SUM(FORCED_READS);
    (Note: querying V$BH is faster than querying V$PING or V$CACHE. You can query V$BH for the block number, file number and so on, and then join it to OBJ$ to find the object name.)
    SELECT O.NAME, BH.*
    FROM V$BH BH, OBJ$ O
    WHERE O.OBJ# = BH.OBJD
    AND (BH.FORCED_READS > 10 OR BH.FORCED_WRITES > 10);
    (e) For the blocks that are pinged most, compare against the datafile FILE# assignments in GC_FILES_TO_LOCKS to find out whether a single PCM lock covers several blocks. If so, check whether that lock covers blocks from several files.
    (f) If one PCM lock does cover several blocks, check whether other instances are requesting a block that is already locked, or a different block that happens to be covered by the same PCM lock.
    (g) If the block itself does not show up in the other instances, unnecessary contention (false pinging) is occurring: the other instance is requesting not that block but another block managed by the same PCM lock, so the same PCM lock is being requested. To minimize this unnecessary contention on one or more datafiles, increase the GC_FILES_TO_LOCKS values so that more PCM locks are allocated and each PCM lock covers fewer blocks.
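    As a rough illustration only (the datafile numbers and lock counts are invented; confirm the exact syntax in the Oracle8 OPS manual referenced at the end of this note), the parameter is set in the init.ora file along these lines:
    GC_FILES_TO_LOCKS = "1=600:2=400"
    This would assign 600 PCM locks to datafile 1 and 400 to datafile 2, so each PCM lock covers fewer blocks of those files and false pinging becomes less likely.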
    (h) If the same block shows up several times in the buffer caches of several instances, the instances are contending for the same data. When several instances need to change the same block, you can improve performance by running the application that handles that data on one node only. When several instances change different rows within the same block, it is better to re-create the table with the FREELIST GROUPS storage option, allocate extents to specific instances, and arrange for the updates to go selectively to the appropriate extents. For small tables you can improve performance by using PCTFREE and PCTUSED so that each block holds only one row.
    If the row contention comes from generating unique numbers, change the application to use a SEQUENCE to reduce the contention. (Contention for data blocks or other unique resources does not necessarily have a serious impact on performance.
    If application response time is not yet a problem and system usage is not about to grow significantly, you may not need to tune the parallel server at all.)
    2) Querying V$PING to diagnose pings
    (a) Query V$PING for aggregate lock conversion statistics:
    SQL> SELECT NAME, FILE#, CLASS#, MAX(XNC) FROM V$PING
    GROUP BY NAME, FILE#, CLASS#
    ORDER BY NAME, FILE#, CLASS#;
    NAME      FILE#      CLASS#     MAX(XNC)
    DEPT      8          1          492
    DEPT      8          4          10
    EMP       8          1          3197
    EMP       8          4          29
    (b) Query V$PING again to examine how heavily the blocks of file# 8 are pinged:
    SQL> SELECT * FROM V$PING WHERE FILE# = 8;
    FILE#  BLOCK#   STAT   XNC   CLASS#  NAME  KIND
    8      98       XCUR   450   1       EMP   TABLE
    8      764      SCUR   59    1       DEPT  TABLE
    (c) Find the rows of the EMP table that are stored in block 98. Convert the BLOCK# value to hexadecimal and compare it with the ROWID values (98 decimal is 62 hexadecimal):
    SQL> SELECT ROWID, EMPNO, ENAME FROM EMP
    WHERE chartorowid(rowid) like '00000062%';
    ROWID                 EMPNO      ENAME
    00000062.0000.0008    12340      JONES
    00000062.0000.0008    6491       CLARK
    3) Querying V$CLASS_PING, V$FILE_PING and V$BH
    These views let you break down the biggest sources of contention by file or by block class.
    - V$CLASS_PING
    (a) Useful for finding out which class of blocks (for example rollback segment blocks) is pinged most.
    (b) Lets you break the figures down by lock conversion type (for example Null -> Shared) or by the physical read and write I/O caused by the conversions.
    (c) Shows cumulative figures since instance startup.
    (d) To spread the contention, place the different block classes in different files; for example, keep rollback segments and table data in separate files.
    - V$FILE_PING
    (a) Identifies which files are pinged most.
    (b) The statistics are cumulative since instance startup.
    (c) To spread the contention, consider moving objects out of the most heavily pinged files into other files. If a particular table is pinged heavily, consider partitioning it or moving partitions to other files.
    - V$BH
    (a) Acts as a snapshot of the buffer cache at a given point in time. Query V$BH periodically and follow how the figures change.
    (b) The values are point-in-time figures, not cumulative since startup. By checking periodically while the system is running you can collect information about pings, identify the objects in the buffer cache that are involved in pinging, and examine the forced read/write I/O that pinging causes.
    (c) V$BH has an object id column, so you can join it to OBJ$ to obtain the object name.
    (The global dynamic performance views (GV$) include GV$CLASS_PING, GV$FILE_PING and GV$BH, corresponding to V$CLASS_PING, V$FILE_PING and V$BH.)
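    A minimal sketch of using the GV$ views from one session (assuming you have the privileges to query them): every GV$ view adds an INST_ID column, so for example
    SELECT INST_ID, COUNT(*) FROM GV$BH GROUP BY INST_ID;
    shows how many buffers each instance currently holds in its buffer cache, without connecting to each instance separately.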
    4) Monitoring contention with V$WAITSTAT
    V$WAITSTAT can be used to obtain statistics about block contention, such as on rollback segments and free lists.
    - Monitoring contention on free list blocks
    Deal with free list contention in the following steps.
    (a) Check the number of waits for free blocks on the free list:
    SQL> SELECT CLASS, COUNT FROM V$WAITSTAT
    2 WHERE CLASS = 'free list';
    CLASS        COUNT
    free list    12
    (b) Check the total number of requests (SUM) over the same period:
    SQL> SELECT SUM(VALUE) FROM V$SYSSTAT
    2 WHERE name IN ('db block gets', 'consistent gets');
    SUM(VALUE)
    12050211
    (c) If the waits for free blocks (COUNT) exceed 1% of the total requests (SUM), consider adding free lists to reduce the contention.
    To add free lists to a table, re-create it with a larger FREELISTS storage parameter. Set the FREELISTS value to the number of users inserting concurrently.
    SQL> CREATE TABLE new_emp
    2 STORAGE (FREELISTS 5)
    3 AS SELECT * FROM emp;
    Table created.
    SQL> DROP TABLE emp;
    Table dropped.
    SQL> RENAME new_emp TO emp;
    Table renamed.
    - Monitoring contention on rollback segments
    Deal with rollback segment contention in the following steps.
    (a) Check rollback segment contention with V$WAITSTAT:
    SQL> SELECT CLASS, COUNT
    2 FROM V$WAITSTAT
    3 WHERE CLASS IN ('system undo header', 'system undo block', 'undo header', 'undo block');
    CLASS                 COUNT
    system undo header    12
    system undo block     11
    undo header           28
    undo block            6
    (b) Get the total number of requests (SUM) over the same period:
    SQL> SELECT SUM(VALUE) FROM V$SYSSTAT
    2 WHERE name IN ('db block gets', 'consistent gets');
    SUM(VALUE)
    12050211
    (c) If the number of waits (COUNT) exceeds 1% of the total requests for the rollback segments, add rollback segments with the CREATE ROLLBACK SEGMENT command.
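    A minimal sketch of such a command (the segment name, tablespace name and storage values are invented for illustration):
    CREATE ROLLBACK SEGMENT rbs_extra
    TABLESPACE rbs
    STORAGE (INITIAL 1M NEXT 1M MINEXTENTS 20);
    ALTER ROLLBACK SEGMENT rbs_extra ONLINE;
    Also add the new segment to the ROLLBACK_SEGMENTS parameter in init.ora if it should come online automatically at instance startup.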
    5) Examining I/O with V$FILESTAT and V$DATAFILE
    V$FILESTAT and V$DATAFILE provide statistics that help you judge whether the system is doing too much I/O.
    (a) To list the read/write counts for each datafile together with the database file name:
    SQL> SELECT NAME, PHYRDS, PHYWRTS
    2 FROM V$DATAFILE df, V$FILESTAT fs
    3 WHERE df.file# = fs.file#;
    NAME                         PHYRDS    PHYWRTS
    /test71/ora_system.dbs       7679      2735
    /test71/ora_system1.dbs      32        546
    (b) To monitor reads and writes on files other than database files, use an OS utility such as iostat.
    (c) If disk I/O is so heavy that disks must be added to spread the load, analyze the statistics in V$FILESTAT. To minimize disk I/O contention, the following measures can be taken:
    a. Place datafiles and redo log files on separate disks.
    b. Spread the data of a table across several disks.
    c. Put tables and indexes on different disks.
    d. Reduce disk I/O that is unrelated to the Oracle server.
    (d) Use the V$DATAFILE information to decide whether the datafiles should be spread across different disks to reduce disk I/O contention. Putting the most heavily used datafiles on different disks reduces contention when the data is accessed.
    (See the hardware documentation for the disk I/O limits; problems appear once the disk contention limit is reached. For example, more than 40 I/Os per second is hard for most VMS or Unix systems to handle.)
    Reference Document
    Oracle8 OPS manual.

    Dear Mr. Sanju,
    There are 6 bstat/estat outputs. Which ones do I post?
    Or, if you can give me your email, I shall send all of them to you.
    Dear Mr. SJH,
    Coalescing tablespaces is something that I will do, but there are too many schemas, so how do I identify the important ones? Any pointers please.
    Regards,
    Sriraman

  • SGA tuning on NT

    Product : ORACLE SERVER
    Date written : 2002-04-25
    SGA TUNING ON NT
    ================
    PURPOSE
    This note describes how to tune the SGA on NT.
    Explanations
    <Library cache tuning>
    SELECT gethitratio, reloads/pins "Reload Ratio"
    FROM V$librarycache
    WHERE namespace='SQL AREA';
    1) gethitratio : aim for 90% or higher by improving the efficiency of the application programs.
    2) Reload Ratio : if it exceeds 1%, increase SHARED_POOL_SIZE.
    <Sizing the library cache>
    SELECT SUM(sharable_mem) "DB Object Size"
    FROM v$db_object_cache;
    SELECT SUM(sharable_mem) "SQL Size"
    FROM v$sqlarea
    WHERE executions>5;
    SELECT SUM(250*users_opening) "Cursor Size"
    FROM v$sqlarea;
    Allocate at least the sum of these values.
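    A small sketch that adds the three figures up in one statement (same views as above):
    SELECT (SELECT SUM(sharable_mem) FROM v$db_object_cache)
         + (SELECT SUM(sharable_mem) FROM v$sqlarea WHERE executions > 5)
         + (SELECT SUM(250*users_opening) FROM v$sqlarea) "Min Library Cache Bytes"
    FROM dual;
    The result is a lower bound, in bytes, for the library cache portion of SHARED_POOL_SIZE.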
    <Dictionary Cache tuning>
    SELECT SUM(gets) "Dictionary Cache Gets",
    SUM(getmisses) "Misses",
    SUM(getmisses)/SUM(gets)*100 ||'%' "Miss Ratio"
    FROM v$rowcache;
    If the miss ratio exceeds 15%, consider increasing SHARED_POOL_SIZE.
    <Database Buffer Cache tuning>
    SELECT (1-phy.value/(cur.value + con.value))*100 ||'%'
    "Cache Hit Ratio"
    FROM v$sysstat phy, v$sysstat cur, v$sysstat con
    WHERE phy.name ='physical reads'
    AND cur.name ='db block gets'
    AND con.name='consistent gets';
    If the ratio is below 80%, increase DB_BLOCK_BUFFERS
    (below 90% for raw devices).
    Reference Documents
    --------------------


  • Can't find null on the table

    Hi Everyone,
    This is so weird.
    I tried to find the sum of revenue_amt in my table. (see SQL step #1)
    It returned a null.
    So I know that I have a row that contains a null in the revenue_amt column.
    I verified using this query (see SQL step #2).
    But when I look for the row that contains the null, nothing comes up (see SQL step #3).
    Can someone help? This is so weird.
    SQL step #1:
    select sum(revenue_amt) from cf_ft;
    SUM(REVENUE_AMT)
    SQL step #2:
    select sum(revenue_amt) from cf_ft where revenue_amt is not null;
    SUM(REVENUE_AMT)
    759.56
    SQL step #3:
    select * from cf_ft where revenue_amt is null;
    no rows selected

    Basically the result shown is next to impossible: a SUM without a WHERE clause cannot return NULL if the same SUM with a WHERE clause returns an amount, because aggregate functions ignore NULL values.
    So I doubt that those results are really based on your queries, or that they were captured within a single session with a consistent (repeatable read) view of the data. A SUM can only return NULL if
    a) there are no rows in the table
    b) all rows have null values
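    To see this in isolation, a minimal sketch (the table name t_demo is made up):
    create table t_demo (amt number);
    insert into t_demo values (null);
    select sum(amt) from t_demo;   -- returns NULL: the only value is NULL
    insert into t_demo values (100);
    select sum(amt) from t_demo;   -- returns 100: the NULL value is simply ignored
    So if step #2 returns 759.56, step #1 cannot return NULL against the same data.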
    hth

  • How do I calculate disk usage for a table

    Is there a formula or equation that I can use to calculate disk usage (in MB) for a table?
    I would like to find out what the best option is for initial storage space and monthly growth.

    Hi,
    We will discuss this by taking a simple example:
    SQL>Create table T_REGISTRATION (Name varchar2 (100),
    Fathers_Name varchar2 (100),
    Age number(5),
    Sex char (1),
    Data_of_Birth Date)
    storage (initial 40k next 40k minextents 5)
    Tablespace TBS1;
    Table created.
    SQL> insert into T_REGISTRATION values ('Name_1','Name_1_Father',40,'M','24-FEB-66');
    1 row created.
    SQL> Commit;
    Commit complete
    SQL> analyze table T_REGISTRATION compute statistics;
    Table analyzed.
    SQL> compute sum of blocks on report
    SQL> break on report
    SQL> select extent_id, bytes, blocks
    from user_extents
    where segment_name = 'T_REGISTRATION'
    and segment_type = 'TABLE';
    EXTENT_ID BYTES BLOCKS
    0 65536 8
    1 65536 8
    2 65536 8
    3 65536 8
    Sum 32
    SQL> clear breaks
    SQL> select blocks, empty_blocks,
    avg_space, num_freelist_blocks
    from user_tables
    where table_name = 'T_REGISTRATION';
    BLOCKS EMPTY_BLOCKS AVG_SPACE NUM_FREELIST_BLOCKS
    5 27 8064 0
    From above:-
    1.     We have 32 (Sum) blocks allocated to the table.
    2.     27 blocks are totally empty.
    3.     5 blocks contain data (BLOCKS + EMPTY_BLOCKS = SUM, i.e. 5 + 27 = 32).
    4.     We have an average of about 8064 bytes = 7.8K free on each used block (8064/1024 = 7.8K, the AVG_SPACE value above).
    Therefore our table
    1.     Consumes 5 blocks.
    2.     Of these, 5 blocks * 8K block size - 5 blocks * 7.8K free = about 1K is used by the inserted data.
    Source for this example :- asktom.oracle.com
    Thanks,
    Sankar
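    As a follow-up sketch for sizing in MB (T_REGISTRATION is the example table from above; the statistics must be current, e.g. after ANALYZE or DBMS_STATS):
    -- space currently allocated to the table, in MB
    SELECT ROUND(SUM(bytes)/1024/1024, 2) "Allocated MB"
    FROM user_segments
    WHERE segment_name = 'T_REGISTRATION';
    -- rough growth estimate: average row length times the rows expected per month
    SELECT ROUND(avg_row_len * 100000 / 1024 / 1024, 2) "MB per 100000 rows"
    FROM user_tables
    WHERE table_name = 'T_REGISTRATION';
    Multiply the second figure by your real monthly row volume (100000 is only a placeholder) for a first estimate of monthly growth, and leave some headroom for PCTFREE and block overhead.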
