Spatial Average

Hi all.
I'm very new to all this spatial business, so bear with me.
I need to calculate the average value of an attribute within a region.
I have two tables. One contains a cell_id and a geometry column holding a polygon that defines the bounding box of the cell; each cell is approx 25 km by 25 km.
In another table I have a reading for each cell for each day over a period of a few months.
I need to be able to calculate the average over regions, for example the average for regions with a spatial resolution of 10 degrees.
Does anyone know how I can do this?
I realise I could build a query window and do a separate query for each region, but that would be very, very slow. Any other ideas?
Thanks bec.
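One way to avoid a separate query window per region is a single grouped query that bins each cell into a 10-degree tile and averages per bin. A rough sketch, assuming hypothetical tables cells(cell_id, geom) and readings(cell_id, reading_date, value), coordinates in degrees, and binning on each cell's lower-left MBR corner:

SELECT FLOOR(SDO_GEOM.SDO_MIN_MBR_ORDINATE(c.geom, 1) / 10) * 10 AS region_lon,
       FLOOR(SDO_GEOM.SDO_MIN_MBR_ORDINATE(c.geom, 2) / 10) * 10 AS region_lat,
       AVG(r.value) AS avg_value
  FROM cells c, readings r
 WHERE r.cell_id = c.cell_id
 GROUP BY FLOOR(SDO_GEOM.SDO_MIN_MBR_ORDINATE(c.geom, 1) / 10),
          FLOOR(SDO_GEOM.SDO_MIN_MBR_ORDINATE(c.geom, 2) / 10);

Since each cell is only about 25 km across, assigning it wholly to the tile containing its lower-left corner is usually an acceptable approximation at 10-degree resolution.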


Similar Messages

  • Average several n-th octave spectra

    I need to average the n-th octave spectra (3rd or 12th) from several (2 to 8) microphones to get a spatial average of the spectrum in the room. Is there a function in LabVIEW that does this, or something in the S&V toolkit? Thanks!

    I don't think that there is an out-of-the-box solution for your task, but if I understand you correctly, the only thing you will have to do is average the power associated with each 1/n-octave band across all your microphones. In the S&V toolkit the band power is an output parameter of e.g. the SVT Third-octave Analysis.vi. The band powers for each channel are stored in arrays, so you will simply have to average the corresponding array elements from each of your channels.
    Best regards,
    Jochen Klier
    National Instruments Germany

  • Oracle Spatial User Conference  - GITA Conference Seattle

    http://www.gita.org/events/annual/31/Oracle.asp
    Oracle Spatial User Conference
    Please note that online registration for this event is now closed.
    Thursday, March 13, 2008
    Sheraton Seattle Hotel
    1400 6th Avenue
    Seattle, Washington USA
    GITA invites you to attend the Oracle Spatial Users Conference. If you are currently a user, solutions provider, or systems integrator who depends upon Oracle’s spatial technologies, or if you want to learn why thousands of organizations use Oracle’s spatial database and application server capabilities, this is one event you won’t want to miss.
    Learn about the latest Oracle geospatial technologies and the business and technical benefits they provide as users, solutions providers and Oracle executives share real world experience with the world's most widely used geospatial information technology platform.
    More details will be posted soon—sign up for e-mail updates today!
    ORACLE SPATIAL USER CONFERENCE AT GITA
    Thursday, March 13, 2008—Seattle, Washington
    Preliminary Agenda
    Please check back for updates in the future. This agenda is subject to change.
    Feb. 12 Update: Complete user sessions schedule and abstracts posted
    Wednesday, March 12
    6:00 – 8:30 p.m.      Oracle Spatial User Conference Reception — Cirrus Ballroom, Sheraton Seattle Hotel
    Open to registered & paid user conference attendees only. Registration will be available at the door.
    Thursday, March 13
    8:00 – 8:30 a.m.
    Oracle Spatial Special Interest Group Meeting
    8:30 – 9:00 a.m.      Welcome – Oracle
    9:00 – 10:30 a.m.
    Maps in Business Solutions and Applications (Jayant Sharma)
    * Fusion Middleware and BI
    * OGC Web Services
    * Work and Asset Management
    * Mobile Workforce Management
    10:30 – 11:00 a.m.
    Break
    11:00 a.m. – Noon
    Oracle Spatial 11g – Technical Overview (Siva Ravada)
    * What’s Better?
    * What’s New?
    * What Would You Like To See?
    12:00 – 1:30 p.m.
    Award Luncheon
    1:30 – 3:00 p.m.
    TECHNICAL USE CASES – USER SESSIONS
    Track A
    Mapping & Business Intelligence Applications in Insurance and Retail
    Audatex Insight: Claims Analytics with Oracle Business Intelligence Enterprise Edition and Oracle MapViewer
    Yasser Kanoun, Principal Consultant, KPI Partners
    Sally Suico, Audatex
    Audatex Insight is a claim analytics application that presents automobile claims data in graphical and geographical views for management decision support.
    This presentation describes how the integration of Oracle MapViewer with OBIEE dashboards allowed Audatex to display claim analytics geographically. For instance, a user can view the average cost of car repair variance for a specific insurance company, compared to the whole industry, on a US map at the desired geographical levels.
    CatPortal's LocWizard: An Innovative Approach to Mapping Insurance Risk Intelligence and Enabling Faster Decision Making
    Guru Rao, President, Catastrophe Systems,
    Aon Re Services, Inc.
    Deepak Badoni, Vice President, Catastrophe Systems, Aon Re Services, Inc.
    Instant access to policy and location level insurance data is one of the keys to faster decision making during and after a catastrophe event. Using Oracle Business Intelligence Enterprise Edition and Oracle MapViewer, Aon Re Global has developed an industry leading business intelligence and mapping tool that allows users to seamlessly navigate between reports and maps.
    The design was driven entirely by their clients’ need to answer key questions about their exposures and losses to catastrophes. The system uses a blend of custom programming and out-of-the-box functionality to create an interface that allows users to create powerful visualizations and reports with a few mouse-clicks – which previously took days, even weeks of manual effort.
    Unobtrusive Spatial Enablement of the Oracle Business Intelligence Suite at RL Polk
    Steven Pierce, Principal, Johnston McLamb
    Robert Murray, Technical Product Manager, RL Polk
    This presentation will describe RL Polk’s approach to integrating Oracle MapViewer into Oracle Business Intelligence Suite using Oracle MapViewer's Non-Spatial Data Provider. The NSDP brought an elegant and efficient approach to integrating spatial and non-spatial data in real time.
    Track B
    Oracle Spatial in Public Sector
    Maximizing the Value of Cuyahoga County-Wide GIS Using Oracle Spatial and Oracle Fusion Middleware
    J. Kevin Kelley, Geospatial Information Officer, Cuyahoga County
    G. Patrick Zhu, Software Systems Developer, Michael Baker Corporation
    Discover how to leverage Oracle Spatial and Fusion Middleware technologies to solve current complex county-wide Geospatial needs. Cuyahoga is implementing a cutting-edge architecture to support Grid computing, service-oriented architecture (SOA) and event-driven architecture (EDA) that delivers unprecedented flexibility, performance and scalability.
    Web Mapping with Microsoft Virtual Earth and Oracle 10g in U.S. EPA's Grant Tracking Systems
    Trevor Quinn, Principal Developer, Systalex Corporation
    This presentation details how a U.S. EPA enterprise web application was "geo-enabled" using Microsoft Virtual Earth and Oracle Application Express, and how the back-end Oracle 10g database was transformed into a spatial data engine for Virtual Earth. The presentation demonstrates how to make Oracle MapViewer maps available to commercial mapping APIs as cached tiles, and describes how to serve feature data directly from the database to Virtual Earth using AJAX and PL/SQL.
    Automatic Vehicles Monitoring System at Cotral
    Giovanni Corcione, Sales Consultant, Oracle Italy
    Paolo Castagno, Principal Consultant, Oracle Italy
    Diego Ponzi, Production Monitoring- Innovation Manager, Cotral SPA
    The Automatic Vehicles Monitoring (AVM) system at Cotral SPA monitors a fleet of 1600 buses that take about 4600 trips per day on a "near real time" basis. Through GPRS/HTTP, buses send information such as position, events, alarms, timing, schedule to a central system for storage and analysis in the Spatial Data Infrastructure, based on Oracle Spatial, for bus monitoring, mapping, reporting and trip planning. With Oracle’s linear referencing, buses can be located and displayed in real time. The Oracle MapViewer browser front-end renders interactive maps with dynamic bus positions according to routes and bus stop positions. A demo will be shown.
    3:00 – 3:30 p.m.
    Break / Vendor Booths
    3:30 – 5:15 p.m.
    TECHNICAL USE CASES – USER SESSIONS
    Track A
    Utilities Case Studies
    A Case Study: Re-engineering Cable Industry Business Processes with Spatial Database Technologies
    Dennis Beck, President, Spatial Business Systems
    This presentation highlights how a suite of customer-service related business applications is being deployed to change the cable industry. An overview of the key design criteria will be presented along with highlights of the technical challenges that were faced in building a large-scale set of applications. Details of the applications will be highlighted, as well as an overview of the technical implementation considerations and challenges. The presentation will conclude with a demonstration.
    Web based geospatial business applications - embedding the CAD/GIS client
    Philip O'Doherty, CEO, eSpatial Inc.
    Jon Polay, VP Sales, eSpatial Inc.
    This talk looks at the emerging drive towards development of geospatial GIS/CAD features within web-enabled business applications. It has always been a goal to embed CAD-like capabilities within business applications, but it is only recently that the required database and software infrastructure has made this possible. Leading wireless telecommunications company Verizon will present its VEGA application. The demo includes CAD data editing and manipulation features, seamlessly provided as an end-to-end process, all accessible within a pure web browser.
    Foundations of the New Enterprise: Managing Critical Business Data using Oracle Spatial
    Justin Lokitz, Director of Sales Engineering Organization, Leica Geosystems Geospatial Imaging
    Washington Suburban Sanitary Commission (WSSC) is among the top ten water and wastewater utilities in the United States. Early on, to support its business needs with regard to geospatial data, WSSC built a system using software from many traditional GIS vendors that lacked integration and support for many vital business processes. In 2006 WSSC moved all enterprise data (vector and raster) to Oracle Spatial and implemented the Leica Geosystems ADE suite.
    Modeling Utility Networks with Oracle Spatial Network Data Model
    Peter Manskopf, Senior Consultant, GE Energy
    The capabilities in Oracle Spatial allowed GE to build its next generation GIS client using Oracle Spatial as the data repository. The Oracle Spatial network data model provides the primitive spatial data structures required to model and meet the complex needs of utility customers. This presentation will give a technical overview of how an electrical utility network can be modeled using the Oracle network topology model. The presentation will cover: how Oracle Spatial data structures can be used to model a connected utility network; how the SDO_NET API is used to perform different types of network tracing crucial to utilities. A demo will show the GE client performing network operations on Oracle Spatial.
    Track B
    Oracle Spatial in Public Sector & Map Production
    Using Oracle Spatial and MapViewer for Evaluation of Urban Area Development in Brazil
    Andre Luis Carvalho da Motta e Silva, Strategic Projects Director, CODEPLAN
    Gustavo Neves de Andrade Lemes, Consultant, Sete Serviços
    Fernando Targa, Development Director, GEMPI
    To meet information demand concerning income and job generation programs implemented by Brazil’s Federal District Economic Development Office (SDE), the Federal District Planning Company developed the Urban Areas Management System (SIGAU). Local areas are evaluated through performance indexes that take into account urban features, land plot, block and district, and analysis/simulation of a large volume of data from many governmental offices and systems. Thematic maps enable follow up and decision making on current programs. Oracle Spatial, GeoRaster and MapViewer provide a safe, high performance implementation platform. A demo will be shown.
    Creation, Publication & Update of Maps out of Databases
    Sebastien Lanoe, Product Marketing Manager, Lorienne SA
    The production of maps out of GIS databases is often a challenging process. Lorienne innovates with a new map production environment for map creation, map publication and map updates from Oracle Spatial, with a focus on high quality, production cost, data integrity and diversification of map products across media. The case study with Tele Atlas data stored in Oracle Spatial will address the benefits, the level of quality, the efficiency of the production process and its dedicated user-friendly environment.
    Reengineering Desktop Thick Workgroups into Web
    Rich Enterprise Clients
    Bryan Hall, Spatial Architect, L-3 Communications
    Jeff Walawender, Senior Software Engineer, L-3 Communications
    Cost cutting requires reengineering spatial solutions to directly address business requirements. But enterprise computing for spatial data, even with "Web 2.0", has required the user to give up the responsiveness and feedback that traditional desktop thick-client GIS software provides. We took a different approach in the re-engineering effort and concentrated on making it work as much like a traditional desktop thick client as possible - while simplifying use, making editing more reliable, and actually speeding up rendering. All this while supporting only one versioned Oracle Spatial database and application tier for all users.
    Complete eGovernment solution at City of Bolzano
    Stefan Putzer, CreaForm
    Giulio Lavoriero, Director of Engineering, CreaForm
    The City of Bolzano, Italy has a unique, complete editing and publishing environment for geographical data. The Oracle Spatial-based enterprise editing environment supports import and export into geospatial tools from Bentley and ESRI, and network modeling from Oracle Spatial. Data is shared with GeoJAX, an easy-to-use geographical web browser that uses the Oracle MapViewer framework in combination with J2EE and AJAX for browsing Oracle Spatial data. This provides a flexible viewer that supports spatial queries and can be fully customized (style and functionality). Users can easily import any kind of geographical data from an ESRI file, edit it with CAD-precision functionality and make the data visible to anyone via the web in a very short time.
    5:00 – 5:30 p.m.
    Closing Reception
    Questions about the Oracle Spatial Users Conference? Contact us!
    Phone: 303-337-0513 Fax: 303-337-1001 E-mail: [email protected]

    Hi:
    Some updates regarding the Oracle Spatial User Conference 2008.
    1 - Presentations are now available at
    http://www.oracle.com/technology/products/spatial/htdocs/spatial_conf_0803_idx.html
    All submitted presentations have been posted except for the 3:30 track B slides. Those will be available in a day or two.
    2 - Survey for Conference Attendees: If you attended the conference, please take a few minutes to complete the brief survey: http://www.zoomerang.com/Survey/survey-intro.zgi?p=WEB227LQXQUMMD.
    Take the survey by April 2 to be entered in a random drawing to receive a copy of the Pro Oracle Spatial for Oracle Database 11g book. We'll also give away 10 GITA shoulder bags.
    Thanks to the speakers, sponsors, and participants for a great conference!

  • Oracle Spatial Performance with 10-20,000 users

    Does anyone have any experience with Oracle Spatial being used by, say, 20,000 concurrent users? I am not interested in MapViewer response time, but let's say there is:
    - an app using 800 different tables each having an sdo_geometry column
    - the app is configured with different tables visible on different view scales
    - let's say an average of 40-50 tables is visible at any given time
    - some tables will have only a few records, while others can hold millions.
    - there is no client-side caching
    - clients can zoom in/out and pan.
    Answers I am interested in:
    - What sort of server would be required?
    - How can Oracle serve all that data (each refresh renders the map and retrieves the data over the wire, as there is no client-side caching)?
    - What sort of network infrastructure would be required?
    - Can clients connect to different servers and hence use load balancing, or does Oracle have an automatic mechanism for that?
    Thanks in advance,
    Patrick

    Patrick, et al.
    There are lots of things one can do to improve performance in mapping environments because a lot of the visualisation is based on "background" or read-only data. Here are some tips:
    1. Spatially sort read-only data.
    This tip makes sure that data that are close to each other in space are also next to each other on disk! Dan gave a good suggestion when he referenced Chapter 14, "Reorganize the Table Data to Minimize I/O", pp 580-582, Pro Oracle Spatial. But just as easily one can do a create table as select ... where sdo_filter(...) = 'TRUE', where the filtering object is an optimized rectangle across the whole of the dataset. (This is quite quick on 10g and above but much slower on earlier releases.)
    When implementing this make sure that the created table is created such that its blocks are next to each other in the tablespace. (Consider tablespace defragmentation beforehand.) Also, if the data is READ ONLY set the PCTFREE to 0 in order to pack the data up into as small a number of blocks as possible.
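    A sketch of that CTAS approach (table, index and extent are hypothetical; PCTFREE 0 per the read-only advice above):

    CREATE TABLE roads_sorted PCTFREE 0 AS
    SELECT /*+ INDEX(r roads_geom_idx) */ r.*
      FROM roads r
     WHERE SDO_FILTER(r.geom,
             SDO_GEOMETRY(2003, NULL, NULL,            -- use the layer's SRID
               SDO_ELEM_INFO_ARRAY(1, 1003, 3),        -- optimized rectangle
               SDO_ORDINATE_ARRAY(-180, -90, 180, 90)  -- whole-of-dataset extent
             )) = 'TRUE';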
    2. Generalise data
    Rendering spatial data can be expensive where the data is geometrically detailed (many vertices), especially where the data is being visualised at smaller scales than it was captured at. So, if your "zoom thresholds" allow 1:10,000 data to be used at 1:100,000 then you are going to have problems. Consider pre-generalising the data (see sdo_util.simplify) before deployment, as sketched below. You can add multiple columns to your base table to hold this data. Be careful with polygon data because generalising polygons that share boundaries will create gaps etc as the data is generalised further. Often it is better to export the data to a GIS which can maintain the boundary relationships when generalising (say via topological relationships).
    Oracle's MapViewer has excellent on-the-fly generalisation but here one needs to be careful. Application tier caching (cf Bryan's comments) can help here a lot.
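    For example, pre-generalising into an extra column with sdo_util.simplify (names, the 10m threshold and the 0.05 tolerance are illustrative):

    ALTER TABLE roads ADD (geom_100k SDO_GEOMETRY);

    UPDATE roads
       SET geom_100k = SDO_UTIL.SIMPLIFY(geom, 10, 0.05);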
    3. Don't draw data that is sub-pixel.
    As one zooms out objects become smaller and smaller until they reach a point where the whole object can be drawn within a single pixel. If you have control over your map visualisation application you might want to consider setting the SDO_FILTER parameter "min_resolution" flag dynamically so that its value is the same as the number of meters / pixel (eg min_resolution=10). If this is set Oracle Spatial will only include spatial objects in the returned search set if one side of a geometry's MBR is greater than or equal to this value. Thus any geometries smaller than a pixel will not be returned. Very useful for large scale data being drawn at small scales and for which no selection (eg identify) is required. With Oracle MapViewer this behaviour can be set via the generalized_pixels parameter.
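    In SQL terms the flag looks like this (table and bind variable hypothetical):

    SELECT id
      FROM roads
     WHERE SDO_FILTER(geom, :window_geom, 'min_resolution=10') = 'TRUE';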
    4. SDO_TOLERANCE, Clean Data
    If you are querying data other than via MBR (eg find all land parcels that touch each other) then make sure that your sdo_tolerance values are appropriate. I have seen sites where data captured to 1cm had an sdo_tolerance value set to a millionth of a meter!
    A corollary to this is make sure that all your data passes validation at the chosen sdo_tolerance value before deploying to visualisation. Run sdo_geom.validate_geometry()/validate_layer()...
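    For example, a quick per-row check at the layer tolerance (0.005 here; table name hypothetical):

    SELECT rowid,
           SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(geom, 0.005) AS result
      FROM roads
     WHERE SDO_GEOM.VALIDATE_GEOMETRY_WITH_CONTEXT(geom, 0.005) <> 'TRUE';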
    5. RTree Spatial Indexing
    At 10g and above lots of great work went into the RTree indexing. So, make sure you are using RTrees and not QuadTrees. Also, many GIS applications create sub-optimal RTrees by not using the additional parameters available at 10g and above.
    5.1 If your table/column sdo_geometry data contains only points, lines or polygons then let the RTree indexer know (via layer_gtype) as it can implement certain optimizations based on this knowledge.
    5.2 With 10g you can set the RTree's spatial index data block use via sdo_pct_free. Consider setting this parameter to 0 if the table/column sdo_geometry data is read only.
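    Both of these are set at index creation time, e.g. (names and gtype hypothetical):

    CREATE INDEX roads_geom_idx ON roads(geom)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX
      PARAMETERS('layer_gtype=LINE sdo_pct_free=0');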
    5.3 If a table/column is in high demand (eg it is the most commonly used table in all visualisations) you can consider loading (a part of) the RTree index into memory. Now, with the RTree indexing, the sdo_non_leaf_tbl=true parameter will split the RTree index into its leaf (contains actual rowid reference) and non-leaf (the tree built on the leaves) components. Most RTrees are built without this so only the MDRT*** secondary tables are built. But if sdo_non_leaf_tbl is set to true you will see the creation of an additional MDNT*** secondary table (for the non_leaf part of the rtree index). Now, if appropriate, the non_leaf table can be loaded into memory via the following:
    ALTER TABLE MDNT*** STORAGE(BUFFER_POOL KEEP);
    This is NOT a general panacea for all performance problems. One should investigate other options before embarking on this (cf Tom Kyte's books such as Expert Oracle Database Architecture, 9i and 10g Programming Techniques and Solutions.)
    5.4 Don't forget to check your spatial index data quality regularly. Because many sites use GIS package GUI tools to create tables, load data and index them, there is a real tendency to not check what they have done or regularly monitor the objects. Check the SDO_RTREE_QUALITY column in USER_SDO_INDEX_METADATA and look for indexes with an SDO_RTREE_QUALITY setting that is > 2. If > 2, consider rebuilding or recreating the index.
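    The check itself is a one-liner:

    SELECT sdo_index_name, sdo_rtree_quality
      FROM user_sdo_index_metadata
     WHERE sdo_rtree_quality > 2;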
    6. The rendering engine.
    Whatever rendering engine one uses make sure you try and understand fully what it can and cannot do. AutoDesk's MapGuide is an excellent product but I have seen it simply cache table/column data and never dynamically access it. Also, I have been at one site which was running Deegree and MapViewer and MapViewer was so fast in comparison to Deegree that I was called in to find out why. I discovered that Deegree was using SDO_RELATE(... ANYINTERACT ...) for all MBR queries while MapViewer was using SDO_FILTER. Just this difference was causing some queries to perform at < 10% of the speed of MapViewer!!!!
    7. Consider "denormalising" data
    There is an old adage in databases that is "normalise for edit, denormalise for performance". When we load spatial data we often get it from suppliers in a fairly flat or normalised form. In concert with spatial sorting, consider denormalising the data via aggregations based on a rendering attribute and some sort of spatial unit. For example, if you have 1 million points stored as single points in SDO_GEOMETRY.SDO_POINT which you want to render by a single attribute containing 20 values, consider aggregating the data using this attribute AND some sort of spatial BUCKET or BIN. So, consider using SDO_AGGR_UNION coupled with Spatial Analysis and Mining package functions to GROUP the data BY <<column_name>> and a set of spatial extents.
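    A sketch of the shape of such an aggregation, assuming a hypothetical pts(class_code, bin_id, geom) table where bin_id has already been assigned by a spatial binning step (eg SDO_SAM.TILED_BINS / BIN_LAYER from the Spatial Analysis and Mining package):

    CREATE TABLE pts_agg AS
    SELECT class_code,                                  -- the rendering attribute
           bin_id,                                      -- the spatial bucket
           SDO_AGGR_UNION(SDOAGGRTYPE(geom, 0.005)) AS geom
      FROM pts
     GROUP BY class_code, bin_id;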
    8. Tablespace use
    Finally, talk to your DBA in order to find out how the oracle database's physical and logical storage is organised. Is a SAN being used or SAME arranged disk arrays? Knowing this you can organise your spatial data and indexes using more effective and efficient methods that will ensure greater scalability.
    9. Network fetch
    If your rendering engine (app server) and database are on separate machines you need to investigate what sort of fetch sizes are being used when returning data from queries to the middle tier. Fetch sizes for attribute-only data rows and rows containing spatial data can be, and normally are, radically different. Accepting the default settings for these sizes could be killing you (as could the sort_area_size of the Oracle session the application server has created on the database). For example, I have been informed that MapInfo Pro uses a fixed value of 25 records per fetch when communicating with Oracle. I have done some testing to show that this value can be too small for certain types of spatial data. SQL Developer's GeoRaptor uses 100, which is generally better (and one can modify this). Most programmers accept defaults for network properties when programming in ADO/ODBC/OLEDB/JDBC: just be careful as to what is being set here. (This is one of the great strengths of ArcSDE: its TCP/IP network transport is well written, tuneable and very efficient.)
    10. Physical Format
    Finally, while Oracle's excellent MapViewer requires its spatial data to be in Oracle, other commercial rendering engines do not. So, consider using alternate physical file formats that are more optimal for your rendering engine. For example, Google Earth Enterprise "compiles" all the source data into an optimal format which the server then serves to Google Earth Enterprise clients. Similarly, a shapefile on disk local to the application server (with spatial indexing) may be faster than storing the data back in Oracle on a database server that is being shared with other business databases (eg Oracle Financials). If you don't like this approach and want to use Oracle only, consider a dedicated Oracle XE on the application server for data that is read-only and used in most of your generated maps, eg contour or drainage data.
    Just some things to think about.
    regards
    Simon

  • Simon Greener's Morton Key Clustering in Oracle Spatial

    Hi folks,
    Apologies for the rambling.  With mattyschell heading for greener open source big apple pastures I am looking for new folks to bounce ideas and code off.  I was thinking this week about the discussion last autumn over spatial clustering.
    https://community.oracle.com/thread/3617887
    During the course of the thread we all kind of pooh-poohed spatial clustering as not much of a solution, myself being one of the primary poohers.  Yet the concept certainly remains as something to consider regardless of our opinions.  The yellow book, the Greener/Ravada book, Simon's recent treatise (http://download.oracle.com/otndocs/products/spatial/pdf/biwa_2015/biwa2015_uc_comparativeperformance_greener.pdf) - they all put forward clustering such that at the very least we should consider it a technique we as professionals should be able to do - a tool in the toolbox whether or not it is always the right answer.  I am mildly (very mildly) curious to see if Kothuri, Godfrind and Beinat will recycle their section on spatial clustering with the locked-down MD.HHENCODE into their 12c revision out this summer.  If they don't, then what is the replacement for this technique?  If they do, then we return to all of our griping about this ancient routine that Simon implies may date back to the CHS and their hhcode indexes - at least it's not written in Java!
    Anyhow, so I've been in the midst this month of refreshing some of the datasets I manage and considering clustering the larger tables whilst I am at it.  Do I really expect to see huge performance gains?   Well... not really.  But it does seem like something that should be easy to accomplish, certainly something that "doesn't hurt" and shows that I am on top of things (e.g. "checks the box").  But returning to the discussion from last fall, just what is the best way to do this in Oracle Spatial?
    So if we agree to ignore poor old MD.HHENCODE, then what?  Hilbert curves look nifty but no one seems to be stepping up with the code for them.  And this reroutes us back around to Simon and his Morton key code.
    http://www.spatialdbadvisor.com/oracle_spatial_tips_tricks/138/spatial-sorting-of-data-via-morton-key
    So who all is using Simon's code currently?  If you read that discussion from last fall there does not seem to be anyone doing so and we never heard back from Cat Person on either what he decided to do or what his name is.
    I thought I could take a stab at streamlining Simon's process somewhat to make things easier for myself to roll this onto many tables.  I put together the following small package
    https://github.com/pauldzy/DZ_SDO_CLUSTER/tree/master/Packages
    In particular I wanted to bundle up the side issues of how to convert your lines and polygons into points, automate things somewhat and provide a little verification function to see what results look like.  So again nothing that Simon does not already walk through on his webpage, just make it bit easier to bang out on your tables without writing a separate long SQL process for each one.
    So for example, to use Simon's Morton key logic you need to know the extent envelope of the data (in order to define a proper grid).  So if it's a large table, you'd want to stash the envelope info in the metadata.  You can do this with the update_metadata_envelope procedure, or just suffer through the sdo_aggr_mbr each time if you don't want to go that route (I have one table of small watershed polygons that takes about 9 hours to run sdo_aggr_mbr upon).  So just run things at the SQL prompt:
    SELECT
    DZ_SDO_CLUSTER.MORTON_UPDATE(
        p_table_name  => 'CATCHMENT_NP21'
       ,p_column_name => 'SHAPE'
       ,p_grid_size   => 1000
    )
    FROM dual;
    This will return the update clause populated with the values to use with the morton_key wrapper function, e.g. "morton_key(SHAPE,160.247133275879,-17.673722530871,.0956820001136141,.0352063207508021)".  So then just paste that into an update statement:
    UPDATE foo
    SET my_morton_key = dz_sdo_cluster.morton_key(
        SHAPE
       ,160.247133275879
       ,-17.673722530871
       ,.0956820001136141
       ,.0352063207508021
    );
    Then rebuild your table sorting on the morton_key.  I just use the TOAD rebuild table tool and manually add the order by clause to the rebuild script.  I let TOAD do all the work of moving the indexes, constraints and grants to the new table.  I imagine there are other ways to do this.
    The final function is meant to be popped into Oracle mapviewer or something similar to show your family and friends the results.
    SELECT
    dz_sdo_cluster.morton_visualize(
        'NHDPLUS'
       ,'NHDFLOWLINE_NP21_ACU'
       ,'SHAPE'
       ,'OBJECTID'
       ,'100'
       ,10000
       ,'MORTON_KEY'
    )
    FROM dual;
    Look Mom, there it is!
    So anyhow, this is a first stab at things and I am interested in feedback or suggestions for improvement.  Did I get the logic correct?  Don't spare my feelings if I botched something.  Note that, like Simon, I passed on the matter of just how to determine the proper grid size.  I've been using 1000 for the continental US + Hawaii/PR/VI and, sitting here this morning, I think that probably is too large.  Of course it depends on the size of the geometries and thus the density of the resulting points.  With water features this can vary a lot from place to place, so perhaps 1000 is okay.  What would the algorithm be to determine a decent grid size?  It occurs to me I could tell you the average feature count per Morton key value - okay, well, it's about 10.  That seems small to me.  So I could see another function in this package that returns some kind of summary on the results of the keying to tell you if your grid size estimate was reasonable.
    Cheers and Happy Saturday,
    Paul

    I've done some spatial clustering testing this week.
    Firstly, to reiterate the purpose of spatial clustering as I see it:  spatial clustering can be of benefit in situations where frequent window based spatial queries are made.  In particular it can be very useful in web mapping scenarios where a map server is requesting data using SDO_FILTER or SDO_ANYINTERACT and there is a need to return the data as quickly as possible.  If the data required to satisfy the query can be squeezed into as few blocks as possible, then the IO overhead is clearly reduced.
    As Bryan mentioned above, once the data is in the buffer cache, then the advantage of spatial clustering is reduced.  However it is not always possible to get/keep enough of the data in the buffer cache, so I believe spatial clustering still has merits, particularly if it can be implemented alongside spatial partitioning.
    I ran the tests using an 11.2.0.4 database on my laptop.  I have a hard disk rather than SSD, so the effects of excessive IO are exaggerated.  The database is configured with the default 8kb block size.
    Initially, I created a table PARCELS:
    create table parcels (
      id            integer,
      created_date  date,
      x             number,
      y             number,
      val1          varchar2(20),
      val2          varchar2(100),
      val3          varchar2(200),
      geometry      mdsys.sdo_geometry,
      hilbert_key   number);
    I inserted 2.8 million polygons into this table.  The CREATED_DATE is the actual date the polygons were captured.  I populated val1, val2 and val3 with string values to pad the rows out to simulate some business data sitting alongside the sdo_geometry.
    I set X,Y to the first ordinate of the polygon and then set hilbert_key = sdo_pc_pkg.hilbert_xy2d(power(2,31), x, y).
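    For reference, that population step can be done in plain SQL (a sketch; SDO_UTIL.GETVERTICES extracts the first vertex):

    UPDATE parcels p
       SET (x, y) = (SELECT v.x, v.y
                       FROM TABLE(SDO_UTIL.GETVERTICES(p.geometry)) v
                      WHERE v.id = 1);

    UPDATE parcels
       SET hilbert_key = sdo_pc_pkg.hilbert_xy2d(POWER(2,31), x, y);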
    I then created 4 tables to base the tests upon:
    PARCELS_RANDOM:  Ordered by dbms_random.random - an absolute worst case scenario.  Unrealistic, but worthwhile as a benchmark.
    PARCELS_BASE_DATE:  Ordered by CREATED_DATE.  This is probably pretty close to how the original source data is structured on disk.
    PARCELS_RTREE:  Ordered by RTree.  Achieved by inserting based on an SDO_FILTER query
    PARCELS_HILBERT:  Ordered by the hilbert_key attribute
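    Each ordered copy is then just a create-table-as-select, e.g. for the Hilbert case (a sketch):

    CREATE TABLE parcels_hilbert AS
    SELECT *
      FROM parcels
     ORDER BY hilbert_key;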
    As a first test, I counted the number of blocks required to satisfy an SDO_FILTER query.  E.g.
    select count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from parcels_rtree
    where sdo_filter(geometry,
                    sdo_geometry(2003, 2157, null, sdo_elem_info_array(1, 1003, 3),
                                    sdo_ordinate_array(644232,773809, 651523,780200))) = 'TRUE';
    I'm assuming dbms_rowid.rowid_block_number(rowid) is suitable for this.
    I ran this on each table and repeated it over three windows.
    Results:
    So straight off we can see that the random ordering gave pretty horrific results as the data required to satisfy the query is spread over a large number of blocks.  The natural date based clustering was far better. RTree and Hilbert based clustering reduced this by a further 50% with Hilbert just nosing out RTree.
    Since web mapping is the use case I am most likely to target, I then setup a test case as follows:
    Set up layers in GeoServer for each of the tables.
    Used a script to generate 1,000 random squares over the extent of the data, ranging from 200m to 500m in width and height.
    Used JMeter to make a WMS request for a PNG of each of the 1,000 windows.  JMeter was run sequentially with just one thread, so it waited for each request to complete before starting the next.  I ran these tests 3 times to balance out the results, flushing the buffer cache before each run.
    Results:
    Again the random ordering performed woefully badly - somewhat exacerbated by the quality of the disk on my laptop.  The natural date-based clustering performed far better.  RTree and Hilbert-based clustering further reduced the time by more than half.
    In summary, the results suggest that spatial clustering is worth the effort if:
    the data is not already reasonably well clustered
    you've got a decent quantity of data
    you're expecting a lot of window based queries which need to be returned as quickly as possible
    you don’t expect to be able to fit all the data in the buffer cache
    When it comes to deciding between RTree and Hilbert (or Morton/z-order or any other space filling curve method).... I found that the RTree method can be a bit slow on large datasets, although this may not matter as a one off task.  Plus it requires a spatial index on the source table to start off with.  The key based methods are based on an xy, so for lines and polygons there is an intermediate step to extract an xy.  I would tend to recommend this approach if you also partition the data based on a subset of the cluster key.
    Scripts are available here: https://github.com/john-otoole/oracle_spatial_cluster_test
    John

  • Spatial Performance with join

    I have an Oracle Spatial table with 3.5 million rows plus another auxiliary table with 3.5 million rows. A query joining these two tables returns a full result (250 rows) based on a one-to-one join - here's an example:
    Select count(*)
    from F, N
    where F.id = N.id
    and sdo_relate(F.GEOM,
        mdsys.sdo_geometry(2003, 8307, null,
          mdsys.sdo_elem_info_array(1,1003,1),
          mdsys.sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35,
                                   -120.0,60.35, -120.0,49.5)),
        'mask=ANYINTERACT querytype=WINDOW') = 'TRUE'
    and N.PNUM = '4';
    It takes an average of 35 seconds to get the full result set back. I've gathered statistics, tweaked memory parameters and this is the best I can get. Does anyone have any suggestions?

    This is an interesting problem. It looks like Oracle is doing the right thing for each of the table accesses - use the index and fetch by rowid.
    The only thing you have to play with if you don't go to materialized views or temp tables is how the results of the two table queries are joined.
    You don't have a lot of options. Hash join seems to be slow, but you don't know if it is faster or slower compared with nested loops or merge join.
    I'd compare what you have done with something like the following to test nested loops:
    select /*+ no_merge use_nl (f1,n1) */ count(*)
    from
      (select id
         from f
        where sdo_anyinteract(f.geom,
                sdo_geometry(2003, 8307, null,
                  sdo_elem_info_array(1,1003,1),
                  sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35,
                                     -120.0,60.35, -120.0,49.5))) = 'TRUE') f1,
      (select id
         from n
        where n.pnum = '4') n1
    where f1.id = n1.id;
    and presort with a merge join hint to see how it performs:
    select /*+ no_merge use_merge (f1,n1) */ count(*)
    from
      (select id
         from f
        where sdo_anyinteract(f.geom,
                sdo_geometry(2003, 8307, null,
                  sdo_elem_info_array(1,1003,1),
                  sdo_ordinate_array(-120.0,49.5, -119.0,49.5, -119.0,60.35,
                                     -120.0,60.35, -120.0,49.5))) = 'TRUE'
        order by id) f1,
      (select id
         from n
        where n.pnum = '4'
        order by id) n1
    where f1.id = n1.id;
    It might be that you already have the best Oracle can do, but I'd be curious to know how you make out.
    Dan Abugov
    VP Software Support and Services
    Acquis Inc.

  • Oracle Spatial operator SQL statement help

    I have a 3D elevation point feature class (elev) and a polygon feature class (Building) loaded in Oracle Spatial. I am trying to update the "HEIGHT" attribute of the "Building" Feature class using the average elevation of "elev" feature class. Here below is the SQL statement I used, which generated the same value for all buildings. The avg(elevation) returns the average elevation of all points within all building polygons, not all points within ONE polygon.
    Please help and thanks.
    update building
    set height = (select avg(elevation)
    from elev e, building b
    where sdo_anyinteract(e.shape, b.shape) = 'TRUE');

    Hi,
    try this
    update building b
    set height = (select avg(elevation)
                    from elev e
                   where sdo_anyinteract(e.shape, b.shape) = 'TRUE');
    The difference is that the subquery is now correlated with the row being updated (b.shape), so each building gets the average elevation of only the points that interact with its own polygon.
    Udo

  • Trigger populate georaster (min, max, average)

    Hello all,
    I'm new to Oracle Spatial; I have looked at the documentation and have not found an answer to my question...
    I frequently add georasters to the CITY_IMAGES table. They have one band and a value at each cell.
    I have another georaster which stores the min, max and average values for each cell.
    So what can I do to update these values each time I add a new georaster to my table CITY_IMAGES? I think I need to create a trigger, but I don't know how to proceed.
    Thanks for your answers,
    Best regards

    You can create a DML trigger on the table CITY_IMAGES: when a new georaster object arrives, the trigger recalculates the min/max/average value for each cell and then updates the other georaster object with these values.
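    A minimal sketch of the trigger shell only (the georaster column name and the recalc_city_stats procedure are hypothetical; the statistics logic itself, e.g. reading the new raster's cells via SDO_GEOR.getRasterSubset and folding them into the min/max/average georaster, would live in that procedure):

    CREATE OR REPLACE TRIGGER city_images_stats_trg
    AFTER INSERT ON city_images
    FOR EACH ROW
    BEGIN
      -- recalc_city_stats is a hypothetical procedure you would write:
      -- it reads the cells of the newly inserted georaster and updates
      -- the min/max/average georaster accordingly
      recalc_city_stats(:NEW.georaster);
    END;
    /

    Note that if you load the raster data after the initial insert (as many import tools do), you may need an UPDATE trigger instead of an INSERT trigger.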

  • Oracle Spatial performance question

    All,
    I am doing a performance test on Oracle 11g Spatial. I am simulating doing searches in 10 degree by 10 degree windows over 6M+ images, six arc minutes per side. Here is my spatial query construction:
    String intersectSQL = "SELECT A.name, A.GEOMETRY.Get_WKT() " +
    "FROM six_amin_polygons A " +
    "WHERE SDO_RELATE(A.GEOMETRY,?, " +
    "'mask=inside+coveredby+overlapbdyintersect')='TRUE'";
    where the question mark is replaced by the geometry structure of the search window. The first few searches are fast, then the query times balloon very quickly. PostGIS/PostgreSQL performs these searches in an average time of 30 s per window.
    Here are the initial (first four rows) of Oracle Spatial results:
    area_idx  area_name   sql_query_time  number_results
    0         S80.0W90.0    3890          10100
    1         S80.0W80.0    3124          10100
    2         S80.0W70.0  186484          10100
    3         S80.0W60.0  183077          10100
    Any ideas? Am I using the best mask for image/area intersection? Please advise.
    Thanks,
    Jeff

    With ANYINTERACT you get
    inside+coveredby+overlapbdyintersect+touch,
    since you are comparing polygons to polygons.
    Do you want polygons that touch the window geometry in the result? Do you want all the geometries that have some kind of intersection with the window query? Then you should use the ANYINTERACT mask.
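    That is, keeping the rest of the original query as-is (sketch):

    SELECT A.name, A.GEOMETRY.Get_WKT()
      FROM six_amin_polygons A
     WHERE SDO_RELATE(A.GEOMETRY, :window_geom, 'mask=ANYINTERACT') = 'TRUE';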
    siva

  • Calculating Round Trip Time from Non-Spatial Log Data

    Hi,
    I have a log table which holds the vehicles' "entrance" and "exit" dates (dd.mm.yyyy) to/from regions, like this:
    VEHICLE_ID REGION IN_TIME OUT_TIME
    1001 CUST_A 01.01.2007 03.01.2007
    1001 CUST_B 05.01.2007 06.01.2007
    1002 CUST_C 04.01.2007 06.01.2007
    1001 BASE_A 08.01.2007 11.01.2007
    1002 CUST_D 11.01.2007 12.01.2007
    1001 CUST_A 15.01.2007 15.01.2007
    1001 CUST_F 18.01.2007 19.01.2007
    1001 CUST_A 19.01.2007 20.01.2007
    1002 CUST_E 16.01.2007 19.01.2007
    In fact, this table was created for reporting "average wait time at each customer", but then I calculated "round trip time" by assigning a "base region" (a vehicle's round trip starts from the "base region" and then comes back to it) and creating some rules like "if it passes through CUST_A and CUST_D, it is 'ROUND TRIP A'".
    The vehicles' voyage report looks like this:
    TRIP STARTDATE ENDDATE
    TRIP_A 01.01.2007 13.01.2007
    UNDEFINED 13.01.2007 16.01.2007
    TRIP_B 16.01.2007 02.01.2007
    Now, trips are more complicated and I want to use a spatial mechanism. Any advice?
    thanks,
    Cihan.

    Stephen Rodriguez wrote: "You don't need to worry about the round trip between the AP and WLC. Just make sure the phone-to-phone latency is good."
    HTH,
    Steve
    Please remember to rate useful posts, and mark questions as answered.
    Thanks Steve,
    That makes sense, as post-authentication the phone-to-phone latency would of course be less than 150 ms when the traffic is locally switched.
    But I don't understand the recommendation in the D&D guide: "Roundtrip latency must not exceed 300 milliseconds (ms) for data and 100 ms for voice and data between the access point and the controller."
    I think it is applicable if the traffic is centrally switched? Or is there more to it, for roaming perhaps?
    Thanks
    Jino

  • Anyone have on opinion on the usage of SECUREFILE LOBs for spatial data?

    Hi folks,
    From everything I read, the usage of SECUREFILE LOBs over BASICFILE LOBs appears to be a no-brainer.
    http://www.oracle.com/technetwork/database/options/compression/overview/securefiles-131281.pdf
    Yet the default table creation remains set to create BASICFILE LOBs so my 11g spatial data is still sitting in BASICFILE LOBs (also I need to push it to and from 10g for the moment).
    Should we all be changing over to SECUREFILE LOB storage as a matter of course on 11g? Is there a big payoff for VARRAY storage beyond compression? A little payoff?
    Oracle says SECUREFILE LOBs "dramatically improve performance". Hey, I want some of that!
    http://www.oracle.com/newsletters/sap/products/database/oradb11g-features.html
    But yet there just has not been much traffic on the topic.
    Pro Oracle Spatial mentions weakly on page 250 that
    "Secure File LOBs are expected to be faster than BASIC LOBs",
    but seems to only be referring to the spatial index.
    Here Godfrind says to use them with point clouds but only trumpets the compression aspect.
    http://www.ncg.knaw.nl/Studiedagen/09PointClouds/presentations/PointCloud_14_AlbertGodfrind.pdf
    For those of us using ArcSDE, it's pretty easy to add the SECUREFILE LOB keyword to the dbtune keyword that governs creation of the spatial column. But not a peep on the topic from the ESRI folks that I can find.
    Anyone have any experiences or opinions to share? I am hoping to finally leave 10g behind one day this year and putting together a migration plan. Should converting the LOBs be a first day bullet item or just something to consider sometime down the road?
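    For what it's worth, outside of ArcSDE the change is a storage clause on the SDO_ORDINATES varray - a hedged sketch only (table name and LOB options hypothetical; requires 11g and an ASSM tablespace):

    CREATE TABLE parcels (
      id    NUMBER PRIMARY KEY,
      geom  SDO_GEOMETRY
    )
    VARRAY geom.SDO_ORDINATES STORE AS SECUREFILE LOB (
      COMPRESS MEDIUM
      CACHE
    );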
    Thanks!
    Paul

    I have been writing on the benefits of rounding ordinates on storage and performance.
    That series has not yet been finished and fully published but my main finding (backed by stats/graphs) is that rounding ordinates does have an effect on performance as average coordinates per feature increases.
    As part of the work I also looked at SECUREFILE LOB storage in the hope that some sort of zlib/rle etc compression of the sdo_ordinates would substantially reduce storage size and thus increase performance.
    The reason I looked at this is because a customer has migrated from SDEBINARY to SDO_GEOMETRY (though most others are going from SDEBINARY to SDE.ST_GEOMETRY) and saw a substantial increase in storage costs. They have also taken a performance hit in some aspects of their operations, but that may be due to ArcSDE programmer SDO_GEOMETRY access issues rather than any real issues with Oracle performance (though the perception is very much that Oracle is to blame). SDEBINARY was always a compressed storage format, and SDE.ST_GEOMETRY, being WKB (of some description), also has a low storage footprint.
    The result of my work on LOB storage, though not published, is that I agree with Paul: there is little benefit for a lot of pain.
    However, I still see the whole question of storage size as an area of potential benefit if reductions could be made, as this is a necessary precursor for spatial reaping benefits if column-oriented storage ever migrates down to the standard Oracle database.
    regards
    Simon

  • How do I change Spatial quality, Temporal quality, in FCP?

    I have been trying to copy the settings over from the YouTube sharing preset in FCP7 to match them in FCP6. So far I have been able to change everything but the quality of certain settings. For example, these are the settings for the YouTube sharing preset:
    Spatial quality: 50
    Min. Spatial quality: 50
    Temporal quality: 50
    Min. temporal quality: 50
    Average data rate: 8 (Mbps)
    and these are the settings for mine:
    Spatial quality: 75
    Min. Spatial quality: 25
    Temporal quality: 50
    Min. temporal quality: 25
    How do I change these settings to match those in FCP7?

    Most don't.
    It depends on your install approach.
    http://www.google.com/search?hl=en&client=safari&rls=en&q=youtubecompressorpreset&btnG=Search&aq=f&oq=&aqi=

  • $50,000 spatial server -- you spec it...

    You have $50,000 to spend on a new database server whose purpose is to serve up spatial data for ad hoc analysis and relatively simple spatial queries. Our current server is a good NT box, but is very slow loading data into Spatial 8.1.7, and is limited to 2G of memory. We want to run 9i. What would you do? Win2000? A cheap Solaris box? What other considerations? How much RAM? RAID? How many CPUs?

    While I can agree that unless you already have a game, the chances of you actually getting one done in the allotted time are slim, and I can also agree that the prizes are not quite the sort that a lot of us might really like, I think there is still a chance for someone to make a serious contest entry in the short time given.
    I have put about a whole fifteen minutes of thought into this idea, so if it sounds half-baked, well... it is. :) However, I think that there is a better than average chance that someone could get a small scale, decent game completed in 2.5 if they limit their scope and use one of the many preexisting, open-source game engines out there as a base for their game. Your chances of finishing something in the short time are infinitely more likely if you use a preexisting game engine and then just build something on top of it. For instance, say you wanted to make a 2D game of some sort, then just extend the Golden T engine or the GAGE API and spend a greater portion of the time making assets like sound and image files. Yeah, you may not be able to program Quake 3 in the allotted time, but you could at least get something decent going.
    Also, I like TheDavid's idea of just getting something done to get your name out there and on the docket. You may not win, but you may get noticed, and if you're looking for a job, then maybe that is a consideration worth entertaining. Besides, if you just make games for the fun of it, why not do so and get something for it, no matter how remote your chances are?
    And, if all else fails, 4K contest entries sounds like a good fallback plan. :)
    -Dok

  • Average & SUM in a Single criteria in the Pivot View

    Hi,
    Can you please let me know whether it is possible to have SUM at the Grand Total level and Average on the right-hand side of the Pivot Table view at the same time.
    Thanks & Regards
    Siva.

    Hi,
    The way you want to do it is not possible.
    What you can do is make a union of 2 reports, one using the SUM and the other the AVG, and not use the Grand Total provided by OBIEE. It's kind of creating your own Grand Totals.
    Look at this thread it will help:
    http://obiee101.blogspot.com/2010/08/obiee-combine-with-similar-request.html
    If you are using a pivot table you can think about a new calculated item as a Grand Total:
    http://varanasisaichand.blogspot.com/2010/12/aggregate-functions-on-grand-total.html
    Best regards
    Adil
    PS: Please don't forget to close the thread and assign points when your question is answered

  • IMAQ Resample performance. Any better choice for 50% downsample? (average 2x2 -> 1 pixel)

    My video source is a 4 Mpixel (2k x 2k resolution) USB3 camera. This is displaying a live image OK in LabVIEW at 45 fps using only 20% CPU. So far, so good.
    I added an "IMAQ Resample" block to downsize this to a 1024 x 1024 image. That works with almost no additional processing time if I select "Zero Order" interpolation (i.e. plain subsample to the value of the nearest pixel). However, I want to average each 2x2 block (4 pixels) in the input image into 1 output pixel. I *think* that is the effect of selecting Bi-Linear interpolation. Doing that works, but takes about 45% of CPU. I want to do some other processing but am worried I will quickly run out of CPU time and start dropping frames.
    Is there any better way to do this simple 50% downsize (2x2 average), that would take less CPU overhead, or is this the best way?

    Hi jbeale1,
    In NI-MAX (Measurement & Automation Explorer) select your camera. Under the 'Acquisition Attributes' tab, do you see an option to change the video mode of your camera to a different resolution? If your camera supports it, it would be more efficient to change the resolution there.
    If not , here is a little more info regarding the IMAQ Resample VI:
    http://zone.ni.com/reference/en-XX/help/370281P-01/imaqvision/imaq_resample/
    You are correct, the Bi-Linear option uses a more intensive interpolation technique which is why it is more taxing on your CPU. I hope this is helpful.
    Robert S.
    Applications Engineer
    National Instruments
