Analytic Function partition by
I am almost where I want to be with this, but the last steps are always the killer.
I have a query that returns a list of ETL mappings/packages. The Source package (the first one) can be identified by an ‘@’ in the string, and all packages related to that mapping follow it (sorted by time).
I am stuck trying to propagate the name of this Source Mapping to all the packages below it, until the change to the next Source.
Here’s an example of what I want
Source Grp Name
“sales@source” 1 Sales
salesa 0 Sales
slsb 0 Sales
“orders@hth” 1 Orders
ordersa 0 Orders
ordb 0 Orders
ordersc 0 Orders
“inventory@... “ 1 Inventory
So my SQL below basically shows how I am using LAG, but it comes up short since I have to pre-determine the offset, which won't work in this case.
SELECT source,
CASE WHEN source LIKE '%@%' THEN 1 ELSE 0 END GRP,
CASE WHEN
(CASE WHEN source LIKE '%@%' THEN 1 ELSE 0 END) =1
THEN substr (map_primary_source, 2, (instr(map_primary_source, '@')-3))
ELSE
lag(substr (map_primary_source, 2, (instr(map_primary_source, '@')-3)), 1,0)
over (ORDER BY creation_date)
END NAME,
creation_date
FROM table
WHERE creation_date > SYSDATE -1
ORDER BY creation_date
I’ve been trying different flavors of analytic functions (FIRST_VALUE, OVER (PARTITION BY ...), etc.) but am coming up short.
By the way I am on 8.x
Yes, it works for variable numbers of rows.
Here is how to make your grp column and the solution all in one:
CREATE TABLE test AS (
SELECT 'THIS@DESC' description, 'SALES' group_name FROM DUAL
UNION
SELECT 'THISa' description, 'SALES' group_name FROM DUAL
UNION
SELECT 'THISb' description, 'SALES' group_name FROM DUAL
UNION
SELECT 'THISc' description, 'SALES' group_name FROM DUAL
UNION
SELECT 'THISd' description, 'SALES' group_name FROM DUAL
UNION
SELECT 'THISe' description, 'SALES' group_name FROM DUAL
UNION
SELECT 'DEMO@DESC' description, 'DEMO' group_name FROM DUAL
UNION
SELECT 'DEMOa' description, 'DEMO' group_name FROM DUAL
UNION
SELECT 'DEMOb' description, 'DEMO' group_name FROM DUAL
UNION
SELECT 'THAT@DESC' description, 'THAT' group_name FROM DUAL
UNION
SELECT 'THATa' description, 'THAT' group_name FROM DUAL
UNION
SELECT 'THATb' description, 'THAT' group_name FROM DUAL
UNION
SELECT 'THATc' description, 'THAT' group_name FROM DUAL
UNION
SELECT 'WHERE@DESC' description, 'WHERE' group_name FROM DUAL
UNION
SELECT 'WHEREa' description, 'WHERE' group_name FROM DUAL
);
Table created.
SELECT * FROM test
DESCRIPTIO GROUP
DEMO@DESC DEMO
DEMOa DEMO
DEMOb DEMO
THAT@DESC THAT
THATa THAT
THATb THAT
THATc THAT
THIS@DESC SALES
THISa SALES
THISb SALES
THISc SALES
THISd SALES
THISe SALES
WHERE@DESC WHERE
WHEREa WHERE
SELECT
FIRST_VALUE(description) OVER (PARTITION BY group_name ORDER BY group_num DESC) description,
group_name
FROM
(
SELECT
description,
CASE
WHEN INSTR(description, '@') > 0 THEN 1
ELSE 0
END group_num,
group_name
FROM
test
)
DESCRIPTIO GROUP
DEMO@DESC DEMO
DEMO@DESC DEMO
DEMO@DESC DEMO
THIS@DESC SALES
THIS@DESC SALES
THIS@DESC SALES
THIS@DESC SALES
THIS@DESC SALES
THIS@DESC SALES
THAT@DESC THAT
THAT@DESC THAT
THAT@DESC THAT
THAT@DESC THAT
WHERE@DESC WHERE
WHERE@DESC WHERE
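For reference, the answer above leans on a pre-existing group_name column. When only the creation order is available, as in the original question, the same propagation can be done without knowing any offset: build a group id with a running SUM of the '@' flag, then take FIRST_VALUE within each group. Below is a minimal sketch of that idea with a hypothetical pkgs table and invented rows, run through Python's sqlite3 (window functions need SQLite 3.25+); the Oracle form is the same SQL, using the analytic functions available from 8.1.6 on.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pkgs (source TEXT, creation_date INTEGER);
INSERT INTO pkgs VALUES
  ('sales@source', 1), ('salesa', 2), ('slsb', 3),
  ('orders@hth', 4), ('ordersa', 5), ('ordb', 6), ('ordersc', 7);
""")

# Running SUM of the '@' flag: every row joins the island opened by the
# most recent source row; FIRST_VALUE then propagates that source name.
rows = conn.execute("""
SELECT source,
       FIRST_VALUE(source) OVER (PARTITION BY isl ORDER BY creation_date) AS src_name
FROM (
    SELECT source, creation_date,
           SUM(CASE WHEN source LIKE '%@%' THEN 1 ELSE 0 END)
             OVER (ORDER BY creation_date) AS isl
    FROM pkgs
) AS flagged
ORDER BY creation_date
""").fetchall()
```

The SUBSTR/INSTR cleanup of the name from the original post can then be applied to src_name in the outer query.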
Similar Messages
-
Case Statement in Analytic Function SUM(n) OVER(PARTITION BY x)
Hi Guys,
I have the following SQL that doesn't seem to consider the WHEN clause I am using in the CASE statement inside the analytic function (SUM). Could somebody let me know why, and suggest a solution?
Select SUM(Case When (A.Flag = 'B' and B.Status != 'C') Then (NVL(A.Amount_Cr, 0) - (NVL(A.Amount_Dr,0))) Else 0 End) OVER (PARTITION BY A.Period_Year) Annual_amount
, A.period_year
, B.status
, A.Flag
from A, B, C
where A.period_year = 2006
and C.Account = '301010'
--and B.STATUS != 'C'
--and A.Flag = 'B'
and A.Col_x = B.Col_x
and A.Col_y = C.Col_y
When I use this SQL, I get
Annual_Amount Period_Year Status Flag
5721017.5 --------- 2006 ---------- C -------- B
5721017.5 --------- 2006 ---------- O -------- B
5721017.5 --------- 2006 ---------- NULL ----- A
And when I put the conditions in the where clause, I get
Annual_Amount Period_Year Status Flag
5721017.5 ---------- 2006 ---------- O -------- B
Here are some scripts:
create table testtable1 ( ColxID number(10), ColyID number(10) , Periodname varchar2(15), Flag varchar2(1), Periodyear number(15), debit number, credit number)
insert into testtable1 values(1, 1000, 'JAN-06', 'A', 2006, 7555523.71, 7647668)
insert into testtable1 values(2, 1001, 'FEB-06', 'B', 2006, 112710, 156047)
insert into testtable1 values(3, 1002, 'MAR-06', 'A', 2006, 200.57, 22376.43)
insert into testtable1 values(4, 1003, 'APR-06', 'B', 2006, 0, 53846)
insert into testtable1 values(5, 1004, 'MAY-06', 'A', 2006, 6349227.19, 6650278.03)
create table testtable2 ( ColxID number(10), Account number(10))
insert into testtable2 values(1, 300100)
insert into testtable2 values(2, 300200)
insert into testtable2 values(3, 300300)
insert into testtable2 values(4, 300400)
insert into testtable2 values(5, 300500)
create table testtable3 ( ColyID number(10), Status varchar2(1))
insert into testtable3 values(1000, 'C')
insert into testtable3 values(1001, 'O')
insert into testtable3 values(1002, 'C')
My SQL:
select t1.periodyear
, SUM(Case When (t1.Flag = 'B' and t3.Status != 'C') Then (NVL(t1.credit, 0) - (NVL(t1.debit,0))) Else 0 End) OVER (PARTITION BY t1.PeriodYear)
Annual_amount
, t1.flag
, t3.status
, t2.account
from testtable1 t1, testtable2 t2, testtable3 t3
where t1.colxid = t2.colxid
and t1.colyid = t3.colyid(+)
--and t1.Flag = 'B' and t3.Status != 'C'
Result:
PeriodYear ----- AnnualAmount ----- Flag ----- Status ----- Account
2006 ------------------ 43337 --------------- A ----------- C ---------- 300100
2006 ------------------ 43337 --------------- B ----------- O ---------- 300200
2006 ------------------ 43337 --------------- A ----------- C ---------- 300300
2006 ------------------ 43337 --------------- B ------------ ----------- 300400
2006 ------------------ 43337 --------------- A ------------ ----------- 300500
With the condition "t1.Flag = 'B' and t3.Status != 'C'" in the WHERE clause instead of in the CASE statement, the result is (which is desired):
PeriodYear ----- AnnualAmount ----- Flag ----- Status ----- Account
2006 ------------------ 43337 --------------- B ----------- O ---------- 300200 -
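This behavior is expected: a CASE inside an analytic SUM only controls which rows contribute to the partition total; it never removes rows from the result, so every row comes back carrying the same filtered total. A small sketch of the difference, with made-up data, run through Python's sqlite3 (SQLite 3.25+ for window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (period_year INT, flag TEXT, status TEXT, amount REAL);
INSERT INTO t VALUES (2006, 'B', 'O', 100), (2006, 'A', 'C', 40), (2006, 'B', 'C', 25);
""")

# CASE inside the analytic SUM: only flag='B' AND status<>'C' rows
# contribute to the total, but ALL rows are still returned.
rows = conn.execute("""
SELECT period_year, flag, status,
       SUM(CASE WHEN flag = 'B' AND status <> 'C' THEN amount ELSE 0 END)
         OVER (PARTITION BY period_year) AS annual_amount
FROM t
""").fetchall()

# Moving the condition to WHERE restricts which rows come back as well.
filtered = conn.execute("""
SELECT period_year, flag, status,
       SUM(amount) OVER (PARTITION BY period_year) AS annual_amount
FROM t
WHERE flag = 'B' AND status <> 'C'
""").fetchall()
```

If the goal is one total row per year, a plain GROUP BY period_year is the simpler tool than an analytic SUM.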
Analytical function SUM() OVER (PARTITION BY ) in Crosstab
I have been trying to resolve this for a very long time. I have an amount column that has to be grouped by Year, but all the other columns grouped by month. I am trying to achieve this using the analytic function SUM(CASE WHEN (Condition1 AND Condition2) THEN SUM(Amount) ELSE 0 END) OVER (PARTITION BY Account, Year), where Account and Sub Account are the left-axis columns. Now, the column displays the values correctly, but on different rows. This is confusing.
For Ex: For Account 00001, there are 3 sub accounts 1000,2000,3000. For Sub account 3000, conditions 1 and 2 are satisfied, so it should display the Amount in the row corresponding to Sub account 3000, and 0 for remaining Sub Accounts. And the Total amount of all the sub accounts, which will be the same as amount for SubAccount 3000 should be displayed in the row corresponding to Account 00001.
But I get blank rows for 1000 and 3000 Sub accounts and Amount displayed in 2000 Sub account, and blank for Account 00001 also.
When I created the same workbook in Tabular form, the same amount is displayed for all the SubAccounts of a single Account.
When I used this CASE statement in TOAD, I figured that this is due to the Analytic function. When I use a group by clause as shown below instead of partition by, I get the results I need.
SELECT (Case when (Condition1 and Condition2) then Sum(Amount) else 0 end), Account, Sub Account FROM tables WHERE conditions GROUP BY Year, Account, Sub Account
But I cannot use GROUP BY for the whole SQL of the workbook, as I need the other columns with page item 'MONTH', not 'Year'.
Could somebody please help me with this?
Hi,
In your tabular form, do you get the correct total displayed against all your sub accounts and the account? If this is correct, then you can use a CASE to ensure that the total is displayed only for the single account.
Once you have the correct totals working in a tabular form, it is easier to reproduce what you want in a cross-tab.
Rod West -
Reports 6i and analytical function
hi
I have this query which works fine in TOAD:
SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
rvt.transaction_date srv_date, inv.segment1 item_no,
rvt.item_desc item_description, hrov.NAME,
( SUBSTR (v.standard_industry_class, 1, 1)
|| '-'
|| po_headers.segment1
|| '-'
|| TO_CHAR (po_headers.creation_date, 'RRRR')
) po_no,
po_headers.creation_date_disp po_date,
( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty
)aMOUNT ,
----Analytic function used here
SUM( ( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,
(SELECT SUM (mot.on_hand)
FROM mtl_onhand_total_mwb_v mot
WHERE inv.inventory_item_id = mot.inventory_item_id
-- AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
AND loc.inventory_location_id = mot.locator_id
AND loc.organization_id = mot.organization_id
AND rvt.locator_id = mot.locator_id) onhand
FROM rcv_vrc_txs_v rvt,
mtl_system_items_b inv,
mtl_item_locations loc,
hr_organization_units_v hrov,
po_headers_v po_headers,
ap_vendors_v v,
po_lines_v po_lines
WHERE inv.inventory_item_id(+) = rvt.item_id
AND po_headers.vendor_id = v.vendor_id
AND rvt.po_line_id = po_lines.po_line_id
AND rvt.po_header_id = po_lines.po_header_id
AND rvt.po_header_id = po_headers.po_header_id
AND rvt.supplier_id = v.vendor_id
AND inv.organization_id = hrov.organization_id
AND rvt.transaction_type = 'DELIVER'
AND rvt.inspection_status_code <> 'REJECTED'
AND rvt.organization_id = inv.organization_id(+)
AND to_char(to_date(rvt.transaction_date, 'DD/MM/YYYY'), 'DD-MON-YYYY') BETWEEN (:p_from_date)
AND NVL (:p_to_date,
:p_from_date)
AND rvt.locator_id = loc.physical_location_id(+)
AND transaction_id NOT IN (
SELECT parent_transaction_id
FROM rcv_vrc_txs_v rvtd
WHERE rvt.item_id = rvtd.item_id
AND rvtd.transaction_type IN
('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
GROUP BY rvt.receipt_num , rvt.supplier ,
rvt.transaction_date , inv.segment1 ,
rvt.item_desc, hrov.NAME, v.standard_industry_class, po_headers.segment1, po_headers.creation_date,
po_headers.creation_date_disp, inv.inventory_item_id, loc.inventory_location_id, loc.organization_id,
rvt.locator_id, rvt.currency_conversion_rate, po_lines.unit_price, rvt.transact_qty
but it gives a blank page in Reports 6i.
Could it be that Reports 6i does not support analytic functions? Kindly suggest an alternative.
Thanking you in advance
Edited by: makdutakdu on Mar 25, 2012 2:22 PM
hi
will the view be like this:
create view S_Amount as SELECT rvt.receipt_num srv_no, rvt.supplier supplier,
rvt.transaction_date srv_date, inv.segment1 item_no,
rvt.item_desc item_description, hrov.NAME,
( SUBSTR (v.standard_industry_class, 1, 1)
|| '-'
|| po_headers.segment1
|| '-'
|| TO_CHAR (po_headers.creation_date, 'RRRR')
) po_no,
po_headers.creation_date_disp po_date,
( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty
)aMOUNT ,
----Analytic function used here
SUM( ( (rvt.currency_conversion_rate * po_lines.unit_price)
* rvt.transact_qty)) over(partition by hrov.NAME) SUM_AMOUNT,
(SELECT SUM (mot.on_hand)
FROM mtl_onhand_total_mwb_v mot
WHERE inv.inventory_item_id = mot.inventory_item_id
-- AND INV.ORGANIZATION_ID=MOT.ORGANIZATION_ID
AND loc.inventory_location_id = mot.locator_id
AND loc.organization_id = mot.organization_id
AND rvt.locator_id = mot.locator_id) onhand
FROM rcv_vrc_txs_v rvt,
mtl_system_items_b inv,
mtl_item_locations loc,
hr_organization_units_v hrov,
po_headers_v po_headers,
ap_vendors_v v,
po_lines_v po_lines
WHERE inv.inventory_item_id(+) = rvt.item_id
AND po_headers.vendor_id = v.vendor_id
AND rvt.po_line_id = po_lines.po_line_id
AND rvt.po_header_id = po_lines.po_header_id
AND rvt.po_header_id = po_headers.po_header_id
AND rvt.supplier_id = v.vendor_id
AND inv.organization_id = hrov.organization_id
AND rvt.transaction_type = 'DELIVER'
AND rvt.inspection_status_code <> 'REJECTED'
AND rvt.organization_id = inv.organization_id(+)
AND rvt.locator_id = loc.physical_location_id(+)
AND transaction_id NOT IN (
SELECT parent_transaction_id
FROM rcv_vrc_txs_v rvtd
WHERE rvt.item_id = rvtd.item_id
AND rvtd.transaction_type IN
('RETURN TO RECEIVING', 'RETURN TO VENDOR'))
GROUP BY rvt.receipt_num , rvt.supplier ,
rvt.transaction_date , inv.segment1 ,
rvt.item_desc, hrov.NAME, v.standard_industry_class, po_headers.segment1, po_headers.creation_date,
po_headers.creation_date_disp, inv.inventory_item_id, loc.inventory_location_id, loc.organization_id,
rvt.locator_id, rvt.currency_conversion_rate, po_lines.unit_price, rvt.transact_qty
Is this correct? I mean, I have not included the bind parameters in the view. Moreover, should this view be joined with all the columns in the FROM clause of the original query?
kindly guide
thanking in advance -
Aggregation of analytic functions not allowed
Hi all, I have a calculated field called Calculation1 with the following calculation:
AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report #7 COMPL".Resource Name )
The result of this calculation is correct, but is repeated for all the rows I have in the dataset.
Group Name Resourse name Calculation1
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
SH Group Mr. A 10
5112 rows
I tried to create another calculation in order to have only ONE value for the couple (Group Name, Resource Name), as AVG(Calculation1), but I get the error: Aggregation of analytic functions not allowed.
I also saw inside the "Edit worksheet" panel that Calculation1 is not represented with the "Sigma" symbol (as, for example, a simple AVG(field_1) would be), and the SQL code has no GROUP BY Group Name, Resource Name.
I'd like to see ONLY one row as:
Group Name Resourse name Calculation1
SH Group Mr. A 10
...which means I grouped by Group Name, Resource Name.
Does anyone know how I can achieve this result, or any workarounds?
Thanks in advance
Alex
Hi Rod, unfortunately I can't use the plain AVG(Resolution_time) because my dataset is quite strange... let me explain better.
I start from this situation:
!http://www.freeimagehosting.net/uploads/6c7bba26bd.jpg!
There are 3 calculated fields:
RANK is the first calculated field:
ROW_NUMBER() OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name,"Tickets Report Assigned To & Created By COMPL".Incident Id ORDER BY "Tickets Report Assigned To & Created By COMPL".Select Flag )
RT Calc is the 2nd calculation:
CASE WHEN RANK = 1 THEN Resolution_time END
and Calculation2 is the 3rd calculation:
AVG(Resolution_time) KEEP(DENSE_RANK FIRST ORDER BY RANK ) OVER(PARTITION BY "User's Groups COMPL".Group Name,"Tickets Report Assigned To & Created By COMPL".Resource Name )
As you can see, the initial dataset has duplicated incident ids, so a simple AVG(Resolution_time) also counts all the duplicates.
I used the rank (based on the field "flag") to take, for each ticket, ONLY one "resolution time" value (in my case I need the resolution time where rank = 1).
So, with Calculation2 I calculated the right AVG(Resolution_time) for each couple Group Name, Resource Name, but as you can see... this result is duplicated for each incident_id.
What I need instead is to see once, for each couple Group Name, Resource Name, the AVG(Resolution_time).
In other words, I need to calculate the AVG(Resolution_time) considering only the values written inside the RT Calc field (where they are NOT NULL; so the total of tickets is not 14 but 9).
I tried to aggregate again using AVG(Calculation2)...but I had the error "Aggregation of analytic functions not allowed"...
Do you know a way to fix this problem ?
Thanks
Alex -
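Outside of Discoverer, the usual fix for Alex's problem is to do the deduplication in an inline view (one row per incident via ROW_NUMBER), then aggregate with a plain GROUP BY, which sidesteps "Aggregation of analytic functions not allowed" entirely. A sketch with invented ticket data (table and column names are hypothetical stand-ins), run through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tickets (grp TEXT, resource TEXT, incident_id INT, flag INT, res_time REAL);
INSERT INTO tickets VALUES
  ('SH Group', 'Mr. A', 1, 1, 10), ('SH Group', 'Mr. A', 1, 2, 10),
  ('SH Group', 'Mr. A', 2, 1, 20), ('SH Group', 'Mr. A', 2, 2, 20),
  ('SH Group', 'Mr. A', 3, 1, 30);
""")

# Rank the duplicates per incident, keep only rank 1, then a plain
# GROUP BY yields exactly one row per (group, resource).
rows = conn.execute("""
SELECT grp, resource, AVG(res_time) AS avg_rt
FROM (
    SELECT grp, resource, res_time,
           ROW_NUMBER() OVER (PARTITION BY grp, resource, incident_id
                              ORDER BY flag) AS rn
    FROM tickets
) AS ranked
WHERE rn = 1
GROUP BY grp, resource
""").fetchall()
```

Because the analytic function lives only in the inline view, the outer AVG is an ordinary aggregate and can be grouped freely.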
Analytic function to count rows based on Special criteria
Hi
I have the following query with an analytic function, but wrong results in the last column, COUNT.
Please help me achieve the required result. I need to change the way I select the last column.
1) I am getting the output ordered by the b.sequence_no column. This is a must.
2) COUNT column:
I don't want the total count based on the THOR column, hence there is no point in grouping by that column.
The actual requirement to achieve COUNT is:
2a) If the THOR and LOC combination changes to a new value in the next row, then COUNT = 1
(in other words, if it is different from the following row).
2b) If the values of THOR and LOC repeat in the following row, then the count should be the total of all those same-value rows, until the rows become different.
(In case 2b, where the rows are the same, I also want to show these same rows only once. This is shown in the required output.)
My present query:
select r.name REGION ,
p.name PT,
do.name DELOFF,
ro.name ROUTE,
decode(th.thorfare_name,'OSIUNKNOWN',NULL,th.thorfare_name)
THOR,
l.name LOC ,
b.sequence_no SEQ,
CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
OVER (ORDER BY b.sequence_no)
OR th.thorfare_name = LEAD (th.thorfare_name)
OVER (ORDER BY b.sequence_no)
THEN COUNT(b.sequence_no) OVER (PARTITION BY r.name, th.thorfare_name, l.name ORDER BY b.sequence_no)
ELSE 1
END COUNT
from t_regions r,t_post_towns p,t_delivery_offices do, t_routes ro, t_counties c,t_head_offices ho,
t_buildings b,t_thoroughfares th,t_localities l
where th.thorfare_id = b.thorfare_id
and nvl(b.invalid,'N')='N'
and b.route_id=ro.route_id(+)
and b.locality_id =l.locality_id(+)
and ro.delivery_office_id=do.delivery_office_id(+)
and do.post_town_id = p.post_town_id(+)
and p.ho_id=ho.ho_id(+)
and ho.county_id = c.county_id(+)
and c.region_id = r.region_id(+)
and r.name='NAAS'
and do.DELIVERY_OFFICE_id= &&DELIVERY_OFFICE_id
and ro.route_id=3405
group by r.name,p.name,do.name,ro.name,th.thorfare_name,l.name,b.sequence_no
ORDER BY ro.name, b.sequence_no;
My incorrect output [PART OF DATA]:
>
REGION PT DELOFF ROUTE THOR LOC SEQ COUNT
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 1 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 2 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 4 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 5 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 THEGROVE CEL 2 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 7 3
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 8 4
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 9 5
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 10 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 11 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 12 8
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 15 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 19 3
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 24 4
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 29 5
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 34 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 39 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 42 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 43 2
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 44 3
My required output [PART OF DATA] - please compare with the above:
>
REGION PT DELOFF ROUTE THOR LOC COUNT
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 THEGROVE CEL 1
NAAS NAAS MAYNOOTH MAYNOOTHR010 NEWTOWNRD CEL 6
NAAS NAAS MAYNOOTH MAYNOOTHR010 DUBLINRD CEL 7
NAAS NAAS MAYNOOTH MAYNOOTHR010 PRIMHILL CEL 2
NOTE: A count of 1 is coming through correctly.
But where there are identical consecutive rows and I want the total count of them, I am not getting it.
Please help.
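For the record, the requirement above is a classic gaps-and-islands problem: flag each row whose THOR/LOC pair differs from the previous row, turn a running SUM of those flags into an island id, then emit one row per island with its size. A simplified sketch with a hypothetical stops table standing in for the joined route data, run through Python's sqlite3 (SQLite 3.25+); in Oracle the same shape works with nested inline views:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE stops (seq INT, thor TEXT, loc TEXT);
INSERT INTO stops VALUES
  (1, 'DUBLINRD', 'CEL'), (2, 'NEWTOWNRD', 'CEL'), (4, 'PRIMHILL', 'CEL'),
  (5, 'NEWTOWNRD', 'CEL'), (6, 'THEGROVE', 'CEL'),
  (7, 'NEWTOWNRD', 'CEL'), (8, 'NEWTOWNRD', 'CEL'), (9, 'NEWTOWNRD', 'CEL');
""")

# chg = 1 whenever THOR/LOC differs from the previous row (LAG is NULL on
# the first row, so it also opens an island); SUM(chg) numbers the islands.
rows = conn.execute("""
SELECT thor, loc, COUNT(*) AS cnt
FROM (
    SELECT thor, loc, SUM(chg) OVER (ORDER BY seq) AS isl
    FROM (
        SELECT seq, thor, loc,
               CASE WHEN thor = LAG(thor) OVER (ORDER BY seq)
                     AND loc  = LAG(loc)  OVER (ORDER BY seq)
                    THEN 0 ELSE 1 END AS chg
        FROM stops
    ) AS flagged
) AS islands
GROUP BY isl, thor, loc
ORDER BY isl
""").fetchall()
```

Each island then appears exactly once with its total, matching the required output where repeated THOR/LOC rows collapse to a single line.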
Thanks
Edited by: Krithi on 04-Nov-2010 05:28
Nicosa wrote:
Hi,
Can you give us some sample data (create table + insert orders) to play with?
Considering your output, I'm not even sure you need an analytic count.
Yes, sure.
I am describing the query again here with 3 tables now to make this easier to understand.
Given below are the create table statements and insert statements for these 3 tables.
These tables are BUILDINGSV, THORV and LOCV.
CREATE TABLE BUILDINGSV (
BUILDING_ID NUMBER(10) NOT NULL,
INVALID VARCHAR2(1 BYTE),
ROUTE_ID NUMBER(10),
LOCALITY_ID NUMBER(10),
SEQUENCE_NO NUMBER(4),
THORFARE_ID NUMBER(10) NOT NULL
);
CREATE TABLE THORV (
THORFARE_ID NUMBER(10) NOT NULL,
THORFARE_NAME VARCHAR2(40 BYTE) NOT NULL
);
CREATE TABLE LOCV (
LOCALITY_ID NUMBER(10) NOT NULL,
NAME VARCHAR2(40 BYTE) NOT NULL
);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002372, 'N', 3405, 37382613, 5, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002363, 'N', 3405, 37382613, 57, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002362, 'N', 3405, 37382613, 56, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002360, 'N', 3405, 37382613, 52, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002358, 'N', 3405, 37382613, 1, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002240, 'N', 3405, 37382613, 6, 9002284);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002229, 'N', 3405, 37382613, 66, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002228, 'N', 3405, 37382613, 65, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002226, 'N', 3405, 37382613, 62, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002222, 'N', 3405, 37382613, 43, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002217, 'N', 3405, 37382613, 125, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002221, 'N', 3405, 37382613, 58, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002214, 'N', 3405, 37382613, 128, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33363182, 'N', 3405, 37382613, 114, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33363185, 'N', 3405, 37382613, 115, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002371, 'N', 3405, 37382613, 2, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003329, 'N', 3405, 37382613, 415, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002359, 'N', 3405, 37382613, 15, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002224, 'N', 3405, 37382613, 61, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003318, 'N', 3405, 37382613, 411, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003326, 'N', 3405, 37382613, 412, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003327, 'N', 3405, 37382613, 413, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003328, 'N', 3405, 37382613, 414, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003330, 'N', 3405, 37382613, 416, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003331, 'N', 3405, 37382613, 417, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27003332, 'N', 3405, 37382613, 410, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27004795, 'N', 3405, 37382613, 514, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(27004807, 'N', 3405, 37382613, 515, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(59002227, 'N', 3405, 37382613, 64, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33230805, 'N', 3405, 37382613, 44, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231027, 'N', 3405, 37382613, 7, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231058, 'N', 3405, 37382613, 9, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231078, 'N', 3405, 37382613, 10, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231087, 'N', 3405, 37382613, 11, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33231093, 'N', 3405, 37382613, 12, 9002375);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(33229890, 'N', 3405, 37382613, 55, 9002364);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561996, 'N', 3405, 34224751, 544, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561997, 'N', 3405, 34224751, 543, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561998, 'N', 3405, 34224751, 555, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562000, 'N', 3405, 34224751, 541, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562001, 'N', 3405, 34224751, 538, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562028, 'N', 3405, 35417256, 525, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562031, 'N', 3405, 35417256, 518, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562032, 'N', 3405, 35417256, 519, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562033, 'N', 3405, 35417256, 523, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561939, 'N', 3405, 34224751, 551, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561940, 'N', 3405, 34224751, 552, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561941, 'N', 3405, 34224751, 553, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561942, 'N', 3405, 35417256, 536, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561943, 'N', 3405, 35417256, 537, 0);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561970, 'N', 3405, 35417256, 522, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561972, 'N', 3405, 35417256, 527, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561974, 'N', 3405, 35417256, 530, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561975, 'N', 3405, 35417256, 531, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561980, 'N', 3405, 34224751, 575, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561981, 'N', 3405, 34224751, 574, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561983, 'N', 3405, 34224751, 571, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561984, 'N', 3405, 34224751, 570, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561985, 'N', 3405, 34224751, 568, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561986, 'N', 3405, 34224751, 567, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561987, 'N', 3405, 34224751, 566, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561989, 'N', 3405, 34224751, 563, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561990, 'N', 3405, 34224751, 562, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561991, 'N', 3405, 34224751, 560, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561992, 'N', 3405, 34224751, 559, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561993, 'N', 3405, 34224751, 558, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561994, 'N', 3405, 34224751, 548, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80561995, 'N', 3405, 34224751, 546, 35417360);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562160, 'N', 3405, 37382613, 139, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562161, 'N', 3405, 37382613, 140, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562162, 'N', 3405, 37382613, 141, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562163, 'N', 3405, 37382613, 142, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562164, 'N', 3405, 37382613, 143, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562165, 'N', 3405, 37382613, 145, 35291878);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562166, 'N', 3405, 37382613, 100, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562167, 'N', 3405, 37382613, 102, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562171, 'N', 3405, 37382613, 107, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562172, 'N', 3405, 37382613, 108, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562174, 'N', 3405, 37382613, 110, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562175, 'N', 3405, 37382613, 111, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562176, 'N', 3405, 37382613, 112, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562177, 'N', 3405, 37382613, 113, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562182, 'N', 3405, 37382613, 123, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562183, 'N', 3405, 37382613, 121, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562184, 'N', 3405, 37382613, 120, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562185, 'N', 3405, 37382613, 118, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562186, 'N', 3405, 37382613, 117, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562187, 'N', 3405, 37382613, 116, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562189, 'N', 3405, 37382613, 95, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562190, 'N', 3405, 37382613, 94, 35291883);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562213, 'N', 3405, 37382613, 89, 35291872);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(80562240, 'N', 3405, 35417256, 516, 35417271);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329559, 'N', 3405, 35329152, 443, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329560, 'N', 3405, 35329152, 444, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329562, 'N', 3405, 35329152, 446, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329109, 'N', 3405, 35329152, 433, 35329181);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329169, 'N', 3405, 35329152, 434, 35329181);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329557, 'N', 3405, 35329152, 441, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329558, 'N', 3405, 35329152, 442, 35329551);
Insert into BUILDINGSV
(BUILDING_ID, INVALID, ROUTE_ID, LOCALITY_ID, SEQUENCE_NO, THORFARE_ID)
Values
(35329191, 'N', 3405, 35329152, 436, 35329181);
COMMIT;
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(0, 'OSIUNKNOWN');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002284, 'THE GROVE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002364, 'DUBLIN ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(9002375, 'NEWTOWN ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291872, 'HAZELHATCH ROAD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291878, 'SIMMONSTOWN PARK');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35291883, 'PRIMROSE HILL');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329181, 'THE COPSE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329213, 'THE COURT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329529, 'THE CRESCENT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329551, 'THE LAWNS');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35329580, 'THE DRIVE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35417271, 'TEMPLEMILLS COTTAGES');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(35417360, 'CHELMSFORD');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(36500023, 'THE CLOSE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(36500101, 'THE GREEN');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375569, 'THE DOWNS');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375595, 'THE PARK');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375754, 'THE AVENUE');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37375781, 'THE VIEW');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37376046, 'THE CRESCENT');
Insert into THORV
(THORFARE_ID, THORFARE_NAME)
Values
(37376048, 'THE GLADE');
COMMIT;
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(34224751, 'SIMMONSTOWN');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(35417256, 'TEMPLEMILLS');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(35329152, 'TEMPLE MANOR');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(37382613, 'CELBRIDGE');
Insert into LOCV
(LOCALITY_ID, NAME)
Values
(37375570, 'SAINT WOLSTAN''S ABBEY');
COMMIT;
------------------------------------------------------------------------------
Now the query with the wrong result:
select decode(th.thorfare_name, 'OSIUNKNOWN', NULL, th.thorfare_name) THOR,
       l.name LOC,
       b.sequence_no SEQ,
       CASE WHEN th.thorfare_name = LAG (th.thorfare_name)
                                    OVER (order by b.sequence_no)
              or th.thorfare_name = LEAD (th.thorfare_name)
                                    OVER (order by b.sequence_no)
            THEN COUNT(b.sequence_no) OVER (partition by th.thorfare_name, l.name
                                            order by b.sequence_no)
            ELSE 1
       END COUNT
from BUILDINGSV b, THORV th, LOCV l
where th.thorfare_id = b.thorfare_id
  and nvl(b.invalid, 'N') = 'N'
  and b.route_id = 3405
  and b.locality_id = l.locality_id(+)
order by b.sequence_no;

The query result - WRONG (only the first few lines):
THOR LOC SEQ COUNT
DUBLIN ROAD CELBRIDGE 1 1
NEWTOWN ROAD CELBRIDGE 2 1
NEWTOWN ROAD CELBRIDGE 5 2
THE GROVE CELBRIDGE 6 1
NEWTOWN ROAD CELBRIDGE 7 3
NEWTOWN ROAD CELBRIDGE 9 4
NEWTOWN ROAD CELBRIDGE 10 5
NEWTOWN ROAD CELBRIDGE 11 6
NEWTOWN ROAD CELBRIDGE 12 7
DUBLIN ROAD CELBRIDGE 15 1
PRIMROSE HILL CELBRIDGE 43 1
PRIMROSE HILL CELBRIDGE 44 2
DUBLIN ROAD CELBRIDGE 52 3
DUBLIN ROAD CELBRIDGE 55 4
DUBLIN ROAD CELBRIDGE 56 5
DUBLIN ROAD CELBRIDGE 57 6
DUBLIN ROAD CELBRIDGE 58 7
PRIMROSE HILL CELBRIDGE 61 3
PRIMROSE HILL CELBRIDGE 62 4
HAZELHATCH ROAD CELBRIDGE 64 1
HAZELHATCH ROAD CELBRIDGE 65 2

The query result - EXPECTED (only the first few lines):
THOR LOC COUNT
DUBLIN ROAD CELBRIDGE 1
NEWTOWN ROAD CELBRIDGE 2
THE GROVE CELBRIDGE 1
NEWTOWN ROAD CELBRIDGE 5
DUBLIN ROAD CELBRIDGE 1
PRIMROSE HILL CELBRIDGE 2
DUBLIN ROAD CELBRIDGE 5
PRIMROSE HILL CELBRIDGE 2
HAZELHATCH ROAD CELBRIDGE 2

Please note: in the expected result I need only one row per group, but it must show the total count of rows until the names change.
So the issues are:
1) The count column values are wrong in my query.
2) I don't want to repeat the same rows (please compare the EXPECTED output against the original).
3) I want the output exactly as in the EXPECTED output. I don't want to group by THOR name alone (e.g. I don't want one count for all DUBLIN ROAD rows). Instead, I want to examine each row against the next: if the THOR/LOC combination is different in the next row then COUNT = 1; otherwise COUNT = the number of rows for that THOR/LOC combination until the combination changes. So runs of rows with the same values must be shown as one row with the total count.
To explain in more detail: I need only one row when the same THOR/LOC names occur in consecutive rows, with the count shown against that one row (i.e. COUNT = how many rows share that THOR/LOC combination until the combination changes value).
Then the process repeats until all rows are finished.
If the following row has a different THOR/LOC combination, then the count for the current row is 1.
Hope this is clear.
Is this doable?
Thanks in advance.
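For reference, the one-row-per-run output asked for here is the classic "gaps and islands" pattern: flag the start of each THOR/LOC run with LAG, number the runs with a running SUM of the flags, then GROUP BY the run number. A sketch against the tables above (hedged: it has not been run against the full data set, and a NULL THOR produced by the OSIUNKNOWN decode will start a new run on every row, since NULL never compares equal):

```sql
WITH base AS (
  SELECT DECODE(th.thorfare_name, 'OSIUNKNOWN', NULL, th.thorfare_name) thor,
         l.name loc,
         b.sequence_no seq
  FROM   buildingsv b, thorv th, locv l
  WHERE  th.thorfare_id = b.thorfare_id
  AND    NVL(b.invalid, 'N') = 'N'
  AND    b.route_id = 3405
  AND    b.locality_id = l.locality_id(+)
),
flagged AS (
  -- 1 marks the first row of each consecutive THOR/LOC run
  SELECT thor, loc, seq,
         CASE WHEN thor = LAG(thor) OVER (ORDER BY seq)
               AND loc  = LAG(loc)  OVER (ORDER BY seq)
              THEN 0 ELSE 1 END run_start
  FROM   base
),
runs AS (
  -- running sum of the flags gives every run its own number
  SELECT thor, loc, seq,
         SUM(run_start) OVER (ORDER BY seq) run_id
  FROM   flagged
)
SELECT thor, loc, MIN(seq) seq, COUNT(*) cnt
FROM   runs
GROUP  BY thor, loc, run_id
ORDER  BY seq;
```

MIN(seq) is kept only to preserve the original ordering of the collapsed rows; drop it from the select list if only THOR, LOC and the count are wanted.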
Edited by: Krithi on 04-Nov-2010 07:45
Edited by: Krithi on 04-Nov-2010 08:31 -
Completion of data series by analytical function
I have the pleasure of learning the benefits of analytical functions and hope to get some help
The case is as follows:
Different projects gets funds from different sources over several years, but not from each source every year.
I want to produce the cumulative sum of funds for each source for each year for each project, but so far I have not been able to do so for years without fund for a particular source.
I have used this syntax:
SUM(fund) OVER(PARTITION BY project, source ORDER BY year ROWS UNBOUNDED PRECEDING)
I have also experimented with different variations of the window clause, but without any luck.
This is the last step in a big job I have been working on for several weeks, so I would be very thankful for any help.

If you want to use analytic functions, and if you are on the 10.1.3.3 version of BI EE, then try using EVALUATE and EVALUATE_AGGR, which support native database functions. I have blogged about it here: http://oraclebizint.wordpress.com/2007/09/10/oracle-bi-ee-10133-support-for-native-database-functions-and-aggregates/. But in your case all you might need is a column with the following function:
SUM(Measure BY Col1, Col2...)
I have also blogged about it here http://oraclebizint.wordpress.com/2007/10/02/oracle-bi-ee-101332-varying-aggregation-based-on-levels-analytic-functions-equivalence/.
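In plain Oracle SQL (10g and later), the missing years can also be densified with a partitioned outer join before taking the cumulative sum. A sketch, assuming a table funds(project, source, year, fund) — these names are placeholders, not from the post:

```sql
-- For every (project, source) partition, preserve every distinct year,
-- so the running total also appears in years with no fund.
SELECT f.project, f.source, y.year,
       SUM(NVL(f.fund, 0)) OVER (PARTITION BY f.project, f.source
                                 ORDER BY y.year) AS cum_fund
FROM   funds f PARTITION BY (f.project, f.source)
RIGHT  OUTER JOIN (SELECT DISTINCT year FROM funds) y
       ON (f.year = y.year)
ORDER  BY f.project, f.source, y.year;
```

The PARTITION BY in the FROM clause is the partitioned-outer-join syntax: the join to the year list is repeated per (project, source), and the partition columns stay populated in the densified rows.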
Thanks,
Venkat
http://oraclebizint.wordpress.com -
Analytic function to retrieve a value one year ago
Hello,
I'm trying to find an analytic function to get a value on another row by looking on a date with Oracle 11gR2.
I have a table with a date_id (truncated date), a flag and a measure. For each date, I have at least one row (sometimes 2), so it is gapless.
I would like to find analytic functions to show for each date :
sum of the measure for that date
sum of the measure one week ago
sum of the measure one year ago
As it is gapless, I managed to do it for the week by doing a GROUP BY on the date in a subquery and using LAG with the offset set to 7 on top of it (see below).
However I'm struggling on how to do that for the data one year ago as we might have leap years. I cannot simply set the offset to 365.
Is it possible to do it with a RANGE BETWEEN window clause? I can't manage to have it working with dates.
Week :LAG with offset 7
SQL Fiddle
or
create table daily_counts (
  date_id date,
  internal_flag number,
  measure1 number
);
insert into daily_counts values ('01-Jan-2013', 0, 8014);
insert into daily_counts values ('01-Jan-2013', 1, 2);
insert into daily_counts values ('02-Jan-2013', 0, 1300);
insert into daily_counts values ('02-Jan-2013', 1, 37);
insert into daily_counts values ('03-Jan-2013', 0, 19);
insert into daily_counts values ('03-Jan-2013', 1, 14);
insert into daily_counts values ('04-Jan-2013', 0, 3);
insert into daily_counts values ('05-Jan-2013', 0, 0);
insert into daily_counts values ('05-Jan-2013', 1, 1);
insert into daily_counts values ('06-Jan-2013', 0, 0);
insert into daily_counts values ('07-Jan-2013', 1, 3);
insert into daily_counts values ('08-Jan-2013', 0, 33);
insert into daily_counts values ('08-Jan-2013', 1, 9);
commit;
select
  date_id,
  total1,
  LAG(total1, 7) OVER (ORDER BY date_id) total_one_week_ago
from (
  select
    date_id,
    SUM(measure1) total1
  from daily_counts
  group by date_id
)
order by 1;
Year : no idea?
I can't give a gapless example, would be too long but if there is a solution with the date directly :
SQL Fiddle
or add this to the schema above :
insert into daily_counts values ('07-Jan-2012', 0, 11);
insert into daily_counts values ('07-Jan-2012', 1, 1);
insert into daily_counts values ('08-Jan-2012', 1, 4);
Thank you for your help.
Floyd

Hi,
Sorry, I'm not sure I understand the problem.
If you are certain that there is at least 1 row for every day, then you can be sure that the GROUP BY will produce exactly 1 row per day, and you can use LAG (total1, 365) just like you already use LAG (total1, 7).
Are you concerned about leap years? That is, when the day is March 1, 2016, do you want the total_one_year_ago column to reflect March 1, 2015, which was 366 days earlier? In that case, use
date_id - ADD_MONTHS (date_id, -12)
instead of 365.
LAG only works with an exact number, but you can use RANGE BETWEEN with other analytic functions, such as MIN or SUM:
SELECT DISTINCT
date_id
, SUM (measure1) OVER (PARTITION BY date_id) AS total1
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 7 PRECEDING
AND 7 PRECEDING
) AS total1_one_week_ago
, SUM (measure1) OVER ( ORDER BY date_id
RANGE BETWEEN 365 PRECEDING
AND 365 PRECEDING
) AS total1_one_year_ago
FROM daily_counts
ORDER BY date_id
Again, use date arithmetic instead of the hard-coded 365, if that's an issue.
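One more option worth noting: when the ORDER BY column is a DATE, Oracle also accepts interval literals in a RANGE window, which sidesteps the fixed 365-day offset. A sketch (hedged — subtracting a year interval from a 29-Feb date_id raises ORA-01839, so test around leap days before relying on it):

```sql
SELECT DISTINCT
       date_id
     , SUM (measure1) OVER (PARTITION BY date_id) AS total1
     , SUM (measure1) OVER ( ORDER BY date_id
                             RANGE BETWEEN INTERVAL '1' YEAR PRECEDING
                                       AND INTERVAL '1' YEAR PRECEDING
                           ) AS total1_one_year_ago
FROM   daily_counts
ORDER  BY date_id;
```

The interval bound makes the window land on the same calendar date a year earlier regardless of whether 365 or 366 days have passed.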
As Hoek said, it really helps to post the exact results you want from the given sample data. You're miles ahead of the people who don't even post the sample data, though.
You're right not to post hundreds of INSERT statements to get a year's data. Here's one way to generate sample data for lots of rows at the same time:
-- Put a 0 into the table for every day in 2012
INSERT INTO daily_counts (date_id, measure1)
SELECT DATE '2011-12-31' + LEVEL
, 0
FROM dual
CONNECT BY LEVEL <= 366 -
Hi all,
I am using ODI 11g(11.1.1.3.0) and I am trying to make an interface using analytic functions in the column mapping, something like below.
sum(salary) over (partition by .....)
The problem is that when ODI sees SUM it assumes it is an aggregate function and adds a GROUP BY. Is there any way to make ODI understand that it is not an aggregate function?
I tried creating an option to specify whether it is analytic or not and updated IKM with no luck.
<%if ( odiRef.getUserExit("ANALYTIC").equals("1") ) { %>
<% } else { %>
<%=odiRef.getGrpBy(i)%>
<%=odiRef.getHaving(i)%>
<% } %>
Thanks in advance

Thanks for the reply.
But I think that in ODI 11g the getFrom() function behaves differently; that is why it is not working.
When I check the A.2.18 getFrom() Method section of the Substitution API Reference document, it says:
Allows the retrieval of the SQL string of the FROM in the source SELECT clause for a given dataset. The FROM statement is built from tables and joins (and according to the SQL capabilities of the technologies) that are used in this dataset.
I think getFrom() also retrieves the group by clause. I created a step in the IKM with just *<%=odiRef.getFrom(0)%>* and I can see that even that generated query has a group by clause -
Can a SQL analytic function help to determine continuity in occurrences?
We need to solve this problem in a sql statement.
imagine a table test with two columns
create table test (id char(1), begin number, end number);
and these values
insert into test values ('a', 1, 2);
insert into test values ('a', 2, 3);
insert into test values ('a', 3, 4);
insert into test values ('a', 7, 10);
insert into test values ('a', 10, 15);
insert into test values ('b', 5, 9);
insert into test values ('b', 9, 21);
insert into test values ('c', 1, 5);
Our goal is to determine continuity in the number sequence between the begin and end attributes for the same id, and to find the min and max numbers of these continuity chains.
The result may be
a, 1, 4
a, 7, 15
b, 5, 21
c, 1, 5
We test some analytic functions like lag, lead, row_number, min, max, partition by, etc to search a way to identify row set that represent a continuity but we didn't find a way to identify (mark) them so we can use min and max functions to extract extreme values.
Any idea is really welcome!

Here is our implementation in a real context, for example:
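Before the production version, a minimal sketch of the same idea on the toy table from the question: flag rows where a new chain starts (begin does not match the previous end), number the chains with a running SUM, then take MIN/MAX per chain.

```sql
SELECT id,
       MIN(begin) AS chain_begin,
       MAX(end)   AS chain_end
FROM  (SELECT id, begin, end,
              -- running sum of start flags numbers the chains per id
              SUM(new_chain) OVER (PARTITION BY id ORDER BY begin) AS chain_id
       FROM  (SELECT id, begin, end,
                     -- 1 when this row does not continue the previous interval
                     CASE WHEN begin = LAG(end) OVER (PARTITION BY id
                                                      ORDER BY begin)
                          THEN 0 ELSE 1 END AS new_chain
              FROM test))
GROUP BY id, chain_id
ORDER BY id, chain_begin;
-- For the sample rows this yields: a 1 4, a 7 15, b 5 21, c 1 5
```

This assumes intervals within an id do not overlap, so ordering by begin is enough to line each row up against its predecessor.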
insert into requesterstage(requesterstage_i, requester_i, t_requesterstage_i, datefrom, dateto )
With ListToAdd as
(Select distinct support.requester_i,
support.datefrom,
support.dateto
from support
where support.datefrom < to_date('01.01.2006', 'dd.mm.yyyy')
and support.t_relief_i = t_relief_ipar.fgetflextypologyclassitem_i(t_relief_ipar.fismedicalexpenses)
and not exists
(select null
from requesterstage
where requesterstage.requester_i = support.requester_i
and support.datefrom < nvl(requesterstage.dateto, support.datefrom + 1)
and nvl(support.dateto, requesterstage.datefrom + 1) > requesterstage.datefrom)
),
ListToAddAnalyzed_1 as
(select requester_i,
datefrom,
dateto,
decode(datefrom,lag(dateto) over (partition by requester_i order by datefrom),0,1) data_set_start
from ListToAdd),
ListToAddAnalyzed_2 as
(select requester_i,
datefrom,
dateto,
data_set_start,
sum(data_set_start) over(order by requester_i, datefrom ) data_set_id
from ListToAddAnalyzed_1)
select requesterstage_iseq.nextval,
requester_i,
t_requesterstage_ipar.fgetflextypologyclassitem_i(t_requesterstage_ipar.fisbefore2006),
datefrom,
decode(sign(nvl(dateto, to_date('01.01.2006', 'dd.mm.yyyy')) -to_date('01.01.2006', 'dd.mm.yyyy')), 0, to_date('01.01.2006', 'dd.mm.yyyy'), -1, dateto, 1, to_date('01.01.2006', 'dd.mm.yyyy'))
from ( select requester_i
, min(datefrom) datefrom
, max(dateto) dateto
From ListToAddAnalyzed_2
group by requester_i, data_set_id
); -
How to use analytic function with aggregate function
hello
Can we use an analytic function and an aggregate function in the same query? I tried to find an example on the net, but did not find one showing how these two kinds of function work together. Please share any link or example with me.
Edited by: Oracle Studnet on Nov 15, 2009 10:29 PM

select
t1.region_name,
t2.division_name,
t3.month,
t3.amount mthly_sales,
max(t3.amount) over (partition by t1.region_name, t2.division_name)
max_mthly_sales
from
region t1,
division t2,
sales t3
where
t1.region_id=t3.region_id
and
t2.division_id=t3.division_id
and
t3.year=2004
Source: http://www.orafusion.com/art_anlytc.htm
Here MAX (aggregate) and OVER (PARTITION BY ...) (analytic) appear in the same query. So we can use aggregate and analytic functions in the same query, and more than one analytic function in the same query as well.
Hth
Girish Sharma -
Analytic Functions with GROUP-BY Clause?
I'm just getting acquainted with analytical functions. I like them. I'm having a problem, though. I want to sum up the results, but either I'm running into a limitation or I'm writing the SQL wrong. Any hints for me?
Hypothetical Table SALES, consisting of a DAY_ID, PRODUCT_ID, PURCHASER_ID, and PURCHASE_PRICE, lists all the purchases made.
Hypothetical Business Question: Product prices can fluctuate over the course of a day. I want to know how much per day I would have made had I sold one each of all my products at their max price for that day. Silly question, I know, but it's the best I could come up with to show the problem.
INSERT INTO SALES VALUES(1,1,1,1.0);
INSERT INTO SALES VALUES(1,1,1,2.0);
INSERT INTO SALES VALUES(1,2,1,3.0);
INSERT INTO SALES VALUES(1,2,1,4.0);
INSERT INTO SALES VALUES(2,1,1,5.0);
INSERT INTO SALES VALUES(2,1,1,6.0);
INSERT INTO SALES VALUES(2,2,1,7.0);
INSERT INTO SALES VALUES(2,2,1,8.0);
COMMIT;
Day 1: If I had sold one product 1 at $2 and one product 2 at $4, I would have made $6.
Day 2: If I had sold one product 1 at $6 and one product 2 at $8, I would have made $14.
The desired result set is:
DAY_ID MY_MEASURE
1 6
2 14

The following SQL gets me tantalizingly close:
SELECT DAY_ID,
MAX(PURCHASE_PRICE)
KEEP(DENSE_RANK FIRST ORDER BY PURCHASE_PRICE DESC)
OVER(PARTITION BY DAY_ID, PRODUCT_ID) AS MY_MEASURE
FROM SALES
ORDER BY DAY_ID
DAY_ID MY_MEASURE
1 2
1 2
1 4
1 4
2 6
2 6
2 8
2 8

But as you can see, my result set is "longer" than I wanted it to be. I want a single row per DAY_ID. I understand what the analytical functions are doing here, and I acknowledge that I am "not doing it right." I just can't seem to figure out how to make it work.
Trying to do a sum() of max() simply does not work, nor does any semblance of a group-by clause that I can come up with. Unfortunately, as soon as I add the windowing function, I am no longer allowed to use group-by expressions (I think).
I am using a reporting tool, so unfortunately using things like inline views are not an option. I need to be able to define "MY_MEASURE" as something the query tool can apply the SUM() function to in its generated SQL.
(Note: The actual problem is slightly less easy to conceptualize, but solving this conundrum will take me much closer to solving the other.)
I humbly solicit your collective wisdom, oh forum.

Thanks, SY. I went that way originally too. Unfortunately that's no different from what I could get without the RANK function.
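For reference, one pattern that does collapse this to a single row per DAY_ID combines the GROUP BY aggregate with an analytic function computed over it — analytics are evaluated after grouping, so MAX here is the per-group aggregate. A sketch (whether a given reporting tool can generate this shape is a separate question):

```sql
SELECT DISTINCT
       day_id,
       -- sum the per-product daily maxima across each day
       SUM(MAX(purchase_price)) OVER (PARTITION BY day_id) AS my_measure
FROM   sales
GROUP  BY day_id, product_id;
-- For the sample data: (1, 6) and (2, 14)
```

The GROUP BY produces one MAX row per (day, product); the analytic SUM adds those maxima within each day, and DISTINCT collapses the duplicated per-product rows.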
SELECT DAY_ID,
PRODUCT_ID,
MAX(PURCHASE_PRICE) MAX_PRICE
FROM SALES
GROUP BY DAY_ID,
PRODUCT_ID
ORDER BY DAY_ID,
PRODUCT_ID
DAY_ID PRODUCT_ID MAX_PRICE
1 1 2
1 2 4
2 1 6
2 2 8 -
Using Analytic functions...
Hi All,
I need help in writing a query using analytic functions.
Foll is my scenario. I have a table cust_points
CREATE TABLE cust_points
( cust_id        varchar2(10),
  pts_dt         date,
  reward_points  number(3),
  bal_points     number(3)
);
insert into cust_points values ('ABC', '01-MAY-2004', 5, 15);
insert into cust_points values ('ABC', '05-MAY-2004', 3, 12);
insert into cust_points values ('ABC', '09-MAY-2004', 3, 9);
insert into cust_points values ('XYZ', '02-MAY-2004', 8, 4);
insert into cust_points values ('XYZ', '03-MAY-2004', 5, 1);
insert into cust_points values ('JKL', '10-MAY-2004', 5, 11);
I want a result set which shows, for each customer, the sum of his/her reward points,
but the balance points as of the last date. So for the above I should have the following results:
cust_id reward_pts bal_points
ABC 11 9
XYZ 13 1
JKL 5 11
I have tried using LAST_VALUE(), e.g.
Select cust_id, sum(reward_points), last_value(bal_points) over (partition by cust_id)... but I run into grouping errors.
Can anyone help?

try this...
SELECT a.pkcol,
nvl(SUM(b.col1),0) col1,
nvl(SUM(b.col2),0) col2,
nvl(SUM(b.col3),0) col3
FROM table1 a, table2 b, table3 c
WHERE a.pkcol = b.plcol(+)
AND a.pkcol = c.pkcol
GROUP BY a.pkcol;
SQL> select a.deptno,
2 nvl((select sum(sal) from test_emp b where a.deptno = b.deptno),0) col1,
3 nvl((select sum(comm) from test_emp b where a.deptno = b.deptno),0) col2
4 from test_dept a;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno
6 group by a.deptno;
DEPTNO COL1 COL2
30 11217 2415
20 13237 738
10 12786 0
SQL> select a.deptno,
2 nvl(sum(b.sal),0) col1,
3 nvl(sum(b.comm),0) col2
4 from test_dept a,test_emp b
5 where a.deptno = b.deptno(+)
6 group by a.deptno;
DEPTNO COL1 COL2
10 12786 0
20 13237 738
30 11217 2415
40 0 0
99 0 0
SQL> -
Analytic Functions - Need resultset only in one select
Hello Experts,
Problem Definition: Using an analytic function, get the total sales for Product P1 and Customer C1 [total sales for the customer itself] in one line. I want to restrict the result set of the query to Product P1; please look at the data, queries, and problems below.
Data
Customer Product Qtr Sales
C1 P1 19991 100.00
C1 P1 19992 125.00
C1 P1 19993 175.00
C1 P1 19994 300.00
C1 P2 19991 100.00
C1 P2 19992 125.00
C1 P2 19993 175.00
C1 P2 19994 300.00
C2 P1 19991 100.00
C2 P1 19992 125.00
C2 P1 19993 175.00
C2 P1 19994 300.00
Problem, I want to display....
Customer Product ProdSales CustSales
C1 P1 700 1400
But without using an outer query; i.e., the query below returns this result with two selects, and I want this result in one query only.
Select * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer) CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Also, I want to avoid Hard coding of P1 in the select clause....
I mean, I can do it in one shot/select, but look at the query below: it uses P1 in the select clause, which is a no-no!! P1 is allowed only in WHERE or HAVING.
Select Customer,Decode(Product, 'P1','P1','P1') Product,
Decode(Product,'P1',Sales,0) ProdSales,
Sum(Sum(Sales)) Over (Partition By Customer ) CustSales
From t1
Where customer='C1' ;
This will get me what I want, but as I said earlier, I want to avoid using P1 in the
Select clause..
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views
2-> Product 'P1' in the Select clause...No hard coded product name in the select clause and group by clause..
Thanks
-DhavalSelect * From ----*** want to avoid this... ***----
(Select Customer,Product,
Sum(Sales) ProdSales,
Sum(Sum(Sales)) Over(Partition By Customer)
CustSales
From t1
Where customer='C1')
Where
Product='P1' ;
Goal is to Avoid using
1-> Two Select/Outer Query/In Line Views

Why? -
Analytical function fine within TOAD but throwing an error for a mapping.
Hi,
When I validate an expression based on SUM .... OVER PARTITION BY in a mapping, I am getting the following error.
Line 4, Col 23:
PLS-00103: Encountered the symbol "OVER" when expecting one of the following:
* & = - + < / > at in is mod remainder not rem then
<an exponent (**)> <> or != or ~= >= <= <> and or like LIKE2_
LIKE4_ LIKEC_ between || multiset member SUBMULTISET_
However, using TOAD, the expression is working fine.
A staging table has got three columns, col1, col2 and col3. The expression is checking for a word in col3. The expression is as under.
(CASE WHEN SUM (CASE WHEN UPPER(INGRP1.col3) LIKE 'some_value%'
THEN 1
ELSE 0
END) OVER (PARTITION BY INGRP1.col1
,INGRP1.col2) > 0
THEN 'Y'
ELSE 'N'
END)
I searched the forum for similar issues, but not able to resolve my issue.
Could you please let me know what's wrong here?
Many thanks,
Manoj.Yes, expression validation in 10g simply does not work for (i.e. does not recognize) analytic functions.
It can simply be ignored. You should also set Generation mode to "Set Based only". Otherwise the mapping will fail to deploy under certain circumstances (when using non-set-based (PL/SQL) operators after the analytic function).