Certification, Customer Performance Benchmarks & Lidar Technical Sessions At Oracle Spatial Summit

Here is a spotlight on some of the training sessions on offer at the LI/Oracle Spatial Summit in Washington, DC, May 19-21: www.locationintelligence.net/dc/agenda .
Preparing for the Oracle Spatial Certification Exam
Steve Pierce, Think Huddle & Albert Godfrind, Oracle
Learn valuable strategies and review technical topics with the experts who developed the exam – and achieve your Oracle Spatial Specialist Certification with the most efficient effort. This session will enable you to master difficult topics (such as GeoRaster, 3D/LIDAR support, topology) quickly through clear examples and demos. Sample questions and exam topic breakdown will be covered. Individual certifications can also apply to requirements for organizations seeking Oracle PartnerNetwork Specialized status.
Offered as both a Monday technical workshop (preregistration required) and a Wednesday overview session.
Content in this session is only available at the Oracle Spatial Summit.
The performance debate is over: Spatial 12c Performance / Customer Benchmark Track
Hear the results of customer benchmarks testing the performance of the 12c release of Spatial and Graph – with results up to 300 times faster. In this track, Nick Salem of Neustar and Steve Pierce of Think Huddle will share benchmarks with actual performance results already realized.
Customers can now address the largest geospatial workloads and see performance increases of 50 to 300 times for common vector analysis operations. With just a small set of configuration changes – and no changes to application code – applications can realize these significant performance improvements. You’ll also learn tips for accelerating performance and running benchmarks with your applications on your systems.
Effectively Utilize LIDAR Data In Your Business Processes
Daniel Geringer, Oracle
Many organizations collect large amounts of LIDAR (point cloud) data for more precise asset management. The ROI of the high costs associated with this type of data acquisition is frequently compromised by underutilization of the data. This session focuses on ways to leverage Oracle Engineered Systems to highly compress and seamlessly store LIDAR data, and to effectively search it spatially in its compressed form to enhance your business processes. Topics covered include loading, compressing, pyramiding, searching, and generating derivative products such as DEMs, TINs, and contours.
Many other technical sessions and tracks will cover spatial technologies with depth and breadth.
Customers including Garmin, Burger King, US Census Bureau, US DOJ, and more will also present use cases using MapViewer & Spatial in location intelligence/BI, transportation, land management and more.
We invite you to join the community there.  For more information about topics, sessions and experts at Oracle Spatial Summit 2014, visit http://www.locationintelligence.net/dc/agenda .  This training event is held in conjunction with Directions' Location Intelligence - bringing together leaders in the LI ecosystem.
For a 10% registration discount, join the Spatial SIG on LinkedIn (http://www.linkedin.com/groups/Oracle-Spatial-Graph-1848520?gid=1848520&trk=skills)
or the Google+ Spatial & Graph community (https://plus.google.com/communities/108078829007193480508). Details are posted there.

Similar Messages

  • Performance Impact of Using Sessions

    All,
    I have a general question regarding the performance impact of using Sessions
    within a Servlet. Specifically, I'm working with a group that wants to use
    the Session object to store customer contacts temporarily. The application
    is expected to service 2000 concurrent users, each maintaining no more than
    200 contacts each of which is about 1KB. The planned configuration is for a
    cluster of 10 instances of WLS. That said, do we have any benchmark data
    that might assist in the session replication design, i.e. using in-memory
    across the cluster, using DBMS, etc.?
    Any information would be appreciated.
    Later,
    Jim Harrald
    BEA Systems
    Office: (901)263-4097
    Cell: (901)568-9267
    email: [email protected]


  • (268625273) Q WSI-29 Can you give any performance benchmarks for WLS web services?

    Q<WSI-29> Can you give any performance benchmarks for WLS web services?
    A<WSI-29>: It is very difficult to quantify the performance of web services,
    since it depends on many variables, including but not limited to: backend
    processing by stateless session beans and message-driven beans, the size of the
    XML SOAP message sent, system hardware (CPU speed, parallel processing, RAM speed),
    and system software (JVM type and WebLogic Server version). However, let me point
    out that the EJB backend processing of requests has the best possible scalability
    within the EJB 2.0 specification (both stateless session and message-driven beans
    can be pooled), and servlets have a proven, scalable track record. Thus it should
    be possible to scale your web service deployment to meet demand. The overhead
    of processing XML within the servlet can be significant, depending on the size
    of the XML data (either as a parameter or a return type). While WLS 6.1 does not have
    any features to address this performance concern, WLS 7.0 will feature Serializer
    and Deserializer classes which can be dedicated to the XML-to-Java and Java-to-XML
    translation (they can also be generated automatically from a DTD, XML Schema,
    or regular JavaBean).
    It is true that web services are not the fastest way to process client requests
    but BEA is committed to making WebLogic server the fastest possible service provider.
    Adam

    see http://www.oracle.com/support/products/oas/sparc30/html/ows08811.html

  • Performance Benchmarks for WL

    Does anyone know what the definition of a "Transaction" is in the performance benchmarks published by BEA?
    How many session bean calls, entity bean calls, etc.?
    Thanks

    If you want to stick with Sun jdk, and don't want to see the error
    messages, set weblogic.system.nativeIO.enable=false in
    weblogic.properties. I wouldn't go to NT, because if you can afford it,
    Solaris on SPARC will be your fastest, most reliable platform. We have
    Solaris SPARC for production, and do our development on Lintel. For us
    staying in a POSIX environment overall keeps things like startup,
    environment setting, and make type scripts all on the same page.
    Ian
    Alexander Sommer wrote:
    >
    I'm currently evaluating WL 5.1 and installed it on linux.
    I have installed the SUN JDK 1.2.2 for linux which should use
    native threads.
    When I start WL 5.1 I get the following message:
    LD_LIBRARY_PATH=/usr/local/weblogic/weblogic/lib/linux:/usr/local/weblogic/weblogic/lib/linux
    Warning: native threads are not supported in this release
    What does this mean?
    Also I get the following messages:
    Fri Apr 07 08:59:38 CEST 2000:<A> <Posix Performance Pack> Could not
    initialize POSIX Performance Pack.
    Fri Apr 07 08:59:38 CEST 2000:<E> <Performance Pack> Unable to load
    performance pack, using Java I/O.
    How much does this impact the performance? Is it better to use NT instead
    of linux?
    thanks,
    Alex--
    Ian R. Brandt
    Software Engineer
    Genomics Collaborative, Inc.
    99 Erie Street
    Cambridge, MA 02139
    (617)661-2400 Ext.244
    (617)661-8899 FAX
    [email protected]

  • Are there any performance benchmark tools for Flash?

    I am looking to benchmark Flash on various computers that I use.  I was surprised that the performance of Adobe Flash on my Intel i5 computer running Windows 7 Pro 64-bit OS and IE 10 was MUCH WORSE than running on a Windows 7 Pro 32-bit on an Intel i3 computer running the same browser. 
    I have tried running both 32-bit IE and 64-bit IE and get the same generally bad performance on the 64-bit Windows OS. I would like to find a tool to benchmark these various computers so that I can establish baseline performance while I explore finding a fix for Adobe Flash on a 64-bit OS.
    Can someone suggest some tools for Flash performance benchmarking? Thank you.

    The best advice we can really give you is that both companies offer free trials; you should download them both and see which works best for you. I own Parallels Desktop v6 and VMware Fusion v3. For me, VMware is better for some things, but Parallels is better for most. Depending on what you do and how you use your applications, your mileage may vary.
    One other note to keep in mind: since Apple is looking to release a new OS version in the very near future, you might want to hold off a bit on your virtualization choice just yet. I would expect that both companies will be working on a new release for support/compatibility with the new Mac OS, so you might want to wait to see if there are any other changes that make you lean towards one or the other...

  • How to use custom performance counters to monitor my app?

    Hi,
    I have an app which reads messages from a specified Service Bus queue. I want to monitor how many messages it read from the queue in the last minute, and I want to use a custom performance counter to achieve that.
    Then I first initialized the PerformanceCounterCategory by the following code:
    var counterCreationData = new CounterCreationData
    {
        CounterName = "numberOfMessages",
        CounterHelp = "help",
        CounterType = PerformanceCounterType.NumberOfItems32
    };
    var counterCollection = new CounterCreationDataCollection();
    counterCollection.Add(counterCreationData);
    PerformanceCounterCategory.Create(
        "CustomCounterCategory",
        "CategoryDescription",
        PerformanceCounterCategoryType.SingleInstance,
        counterCollection);
    Then when my code receives messages, I call the IncrementBy() method to count them. Below is the code snippet:
    private static readonly PerformanceCounter Counter =
        new PerformanceCounter("CustomCounterCategory", "numberOfMessages", string.Empty, false);
    private void OnReceive()
    {
        var messages = _subscriptionClient.ReceiveBatch(32);
        var brokeredMessages = messages as IList<BrokeredMessage> ?? messages.ToList();
        if (brokeredMessages.Any())
        {
            Counter.IncrementBy(brokeredMessages.Count);
            MessageReceived(this, brokeredMessages);
        }
    }
    When I check the data in the Azure table "WADPerformanceCountersTable", the CounterValue is simply increasing. It is not the number of messages received in the last minute.
    To get the number of messages received in the last minute, how should I write my code?
    BTW, I'm using the Azure SDK 2.5. The transfer interval is set to 1 min, and the sample rate to 20 sec. This might be changed.
    Thanks.

    The performance counter type you're using is a simple count. You should use a rate counter instead. The issue is that there is no per-minute counter; by default, the rate counters come as per-second (for example, RateOfCountsPerSecond32). Please check the counter types on the MSDN page below.
    https://msdn.microsoft.com/en-us/library/system.diagnostics.performancecountertype(v=vs.110).aspx

  • Performance benchmarks Win XP

    Does anyone know if there are any performance benchmarks available for Windows XP (via Boot Camp) running on various different Mac Pro, MacBook Pro, and iMac machines?
    I assume that the more RAM, the greater the bus speed, and the faster and more numerous the processors, the better the performance, but I was looking for objective numbers to help me decide what system to buy. I know that one of the PC sites rated Windows on a Mac laptop as very fast.

    Well, I don't know that anyone has run such benchmarks, but it's logical to assume that the faster the machine the faster XP will run on a Boot Camp installation since Windows is running natively on the hardware as though it were a normal PC. What's important to you - speed or mobility? If speed then get a Mac Pro. If mobility then get a MacBook Pro.

  • Performance benchmarks?

    Hello,
    Has anyone done any performance benchmarking on Portal Server? (Or know
    where I can find such information?)
    I'm curious to know:
    1. Given iPlanet's recommended hardware, how many concurrent users does
    this support with decent performance (SSL to gateway to non-SSL server)?
    2. How does the product scale...e.g. if I wanted to have 250 concurrent
    gateway users (running SSL), what hardware is recommended? 500 users?
    1000 users?
    3. Has anyone tried (or does the product even support) the use of SSL
    accelerator cards on the gateway machine?
    Any help is appreciated.
    Thanks,
    Murray

    Hi.
    We did some benchmarking in January, at the Sun iForce center in Holland,
    using iPlanet Portal Server SP2 Hotpatch 3.
    We focused on measuring:
    1) Average time to log in to the portal (with default channels and netlet set up) as
    a function of simultaneous users
    2) Average time to do a "standard operation" inside the portal.
    Our results supported the numbers from iPlanet saying that one could have
    250 simultaneous "secure" users per CPU in this setup. (SP2 HP3)
    This "guaranteed" number of simultaneous users per CPU for iPS 3 SP3 is said
    to be increased to 1500.
    We used Sun E220s with 2x450 MHz CPUs and 1 GB RAM.
    The performance increased remarkably when we went from gateway and server on
    one machine to separate gateway and server.
    Tore

  • Performance benchmarks within BPC

    Has anyone ever performed any benchmarks with BPC?
    I know there is such a wide spectrum of elements that go into benchmarking a BPC solution that most benchmarks cannot easily be correlated to each other (i.e., platform, environment, dimensionality, hierarchies…). Any estimates would be a great benefit.
    I'm hoping someone has performed some serious performance benchmarks, such as:
    • Data throughput (time to submit data, such as 1000 cells per minute).
    • Query time (x row by x column expansion and query).
    Thanks in advance for your input and assistance!
    Fletch

    Hi David,
    I agree with you about your point on performance benchmarks. However, I don't know if you are already aware of the performance tuning guide available at:
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/9016a6b9-3309-2b10-2d91-8233450139c1
    Also you may want to have a look at OSS note 1124332.
    Regards
    Pravin

  • Performance/ benchmarking web mapping

    Hi all,
    Does anyone have any benchmarks, stress tests, or performance tests on web mapping / (internet) GIS / visualisation apps & tools - not necessarily with Oracle Spatial as the database.
    Stuff like redraws per second, point selections per second, maximum concurrent users (all for a standard dataset) is what I'm looking for. Descriptions of test protocols would be interesting as well.
    thanx, Mark
    Mark C. Prins
    Spatial Fusion Specialist
    CARIS Geographic Informations Systems BV
    CARIS BV Support: [email protected]
    CARIS BV Marketing/Sales: [email protected]
    tel. +31413296010
    fax. +31413296012
    WWW Marine: http://www.caris.nl
    WWW Spatial Components : http://www.spatialcomponents.com
    Mgr. van Oorschotstraat 13, 5473 AX Heeswijk
    PO.box 47, 5473 ZG HEESWIJK-DINTHER
    the Netherlands
    Join us for CARIS 2002
    Revolutionizing Geomatics in Norfolk, Virginia
    from September 3rd - 6th.
    Email [email protected] for full details.

    For detail on Studio query and response times, you can enable performance metrics/logging.  See Monitoring the Performance of Queries : Configuring the amount of metrics data to record [1] for details.
    [1] http://docs.oracle.com/cd/E37502_01/studio.300/studio_admin/toc.htm#Configuring%20the%20amount%20of%20metrics%20data%20to%20record

  • Cost and performance benchmarks for security

    Looking for cost and performance benchmarks on the
    Cryptographic toolkit - 8i supplied package,
    Oracle Label Security (OLS)
    Virtual Private Database (VPD)
    Please send any suggestions to my email address - thanks


  • Any performance benchmarks Jserv and OC4J

    Are any performance benchmarks available for OC4J and Jserv?
    Satish

    I had the Seagate 100 GB, 7200 rpm drive in my previous MacBook Pro Core Duo. But when I upgraded to a new MacBook Pro Core 2 Duo recently, I opted for the 160 GB, 5400 rpm drive (I got an Hitachi) because I also found the older 100 GB drive to be a bit small for my needs.
    I was concerned about performance at first too, but after I got the new machine and used it for a while, I realized that the real world performance difference between the 5400 and 7200 drive was negligible at best (at least for the things I do, which includes some video editing, music production, software development, and games). In fact, I really couldn't tell the difference between the drives. And tests done by Bare Feats actually show that the 5400 rpm drive outperforms the 7200 rpm drive when you have 74 GB of data or more. (http://barefeats.com/mbcd7.html)
    So if you need the extra storage capacity, you can't go wrong with the new 160 GB, 5400 rpm drive. I highly recommend it.
    I hope this helps!

  • Berkeley DB Sessions at Oracle OpenWorld Sept 19 - 23

    All,
    Just posting some of the Berkeley DB related sessions at Oracle OpenWorld this year. Hope to see you there.
    Session ID:      S317033
    Title:      Oracle Berkeley DB: Enabling Your Mobile Data Strategy
    Abstract:      Mobile data is everywhere. Deploying applications and updates, as well as collecting data from the field and synchronizing it with the Oracle Database server infrastructure, is everyone's concern today in IT. Mobile devices, by their very nature, are easily damaged, lost, or stolen. Therefore, enabling secure, rapid mobile deployment and synchronization is critically important. By combining Oracle Berkeley DB 11g and Oracle Database Lite Mobile Server, you can easily link your mobile devices, users, applications, and data with the corporate infrastructure in a safe and reliable manner. This session will discuss several real-world use cases.
    Speaker(s):
    Eric Jensen, Oracle, Principal Product Manager
    Greg Rekounas, Rekounas.org,
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add Berkeley DB
    Session Type:      Conference Session
    Session Category:      Case Study
    Duration:      60 min.
    Schedule:      Wednesday, September 22, 11:30 | Hotel Nikko, Golden Gate
    Session ID:      S318539
    Title:      Effortlessly Enhance Your Mobile Applications with Oracle Berkeley DB and SQLite
    Abstract:      In this session, you'll learn the new SQL capabilities of Oracle Berkeley DB 11g. You'll discover how Oracle Berkeley DB is a drop-in replacement for SQLite; applications get improved performance and concurrency without sacrificing simplicity and ease of use. This hands-on lab explores seamless data synchronization for mobile applications using the Oracle Mobile Sync Server to synchronize data with the Oracle Database. Oracle Berkeley DB is an OSS embedded database that has the features, options, reliability, and flexibility that are ideal for developing lightweight commercial mobile applications. Oracle Berkeley DB supports a wide range of mobile platforms, including Android.
    Speaker(s):
    Dave Segleau, Oracle, Product Manager
    Ashok Joshi, Oracle, Senior Director, Development
    Ron Cohen, Oracle, Member of Technical Staff
    Eric Jensen, Oracle, Principal Product Manager
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add 11g, Berkeley DB, Embedded Development, Embedded Technology
    Session Type:      Hands-on Lab
    Session Category:      Features
    Duration:      60 min.
    Schedule:      Wednesday, September 22, 16:45 | Hilton San Francisco, Imperial Ballroom A
    Session ID:      S317032
    Title:      Oracle Berkeley DB: Adding Scalability, Concurrency, and Reliability to SQLite
    Abstract:      Oracle Berkeley DB and SQLite: two industry-leading libraries in a single package. This session will look at use cases where the Oracle Berkeley DB library's advantages bring strong enhancements to common SQLite scenarios. You'll learn how Oracle Berkeley DB's scalability, concurrency, and reliability significantly benefit SQLite applications. The session will focus on Web services, multithreaded applications, and metadata management. It will also explore how to leverage the powerful features in SQLite to maximize the functionality of your application while reducing development costs.
    Speaker(s):
    Jack Kreindler, Genie DB,
    Scott Post, Thomson Reuters, Architect
    Dave Segleau, Oracle, Product Manager
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add Berkeley DB
    Session Type:      Conference Session
    Session Category:      Features
    Duration:      60 min.
    Schedule:      Monday, September 20, 11:30 | Hotel Nikko, Nikko Ballroom I
    Session ID:      S317038
    Title:      Oracle Berkeley DB Java Edition: High Availability for Your Java Data
    Abstract:      Oracle Berkeley DB Java Edition is the most scalable, highest performance Java application data store available today. This session will focus on the latest features, including triggers and sync with Oracle Database as well as new performance and scalability enhancements for high availability, with an emphasis on real-world use cases. We'll discuss deployment, configuration, and maximized throughput scenarios. You'll learn how you can use Oracle Berkeley DB Java Edition High Availability to increase the reliability and performance of your Java application data storage.
    Speaker(s):
    Steve Shoaff, UnboundID Corp, CEO
    Alex Feinberg, Linkedin,
    Ashok Joshi, Oracle, Senior Director, Development
    Event:      JavaOne and Oracle Develop
    Stream(s):      ORACLE DEVELOP, DEVELOP
    Track(s):      Database Development
    Tags:      Add Berkeley DB
    Session Type:      Conference Session
    Session Category:      Features
    Duration:      60 min.
    Schedule:      Thursday, September 23, 12:30 | Hotel Nikko, Mendocino I / II
    Session ID:      S314396
    Title:      Java SE for Embedded Meets Oracle Berkeley DB at the Edge
    Abstract:      This session covers a special case of edge-to-enterprise computing, where the edge consists of embedded devices running Java SE for Embedded in combination with Oracle Berkeley DB Java Edition, a widely used embedded database. The approach fits a larger emerging trend in which edge embedded devices are "smart"--that is, they come equipped with an embedded (in-process) database for structured persistent storage of data as needed. In addition, these devices may optionally come with a thin middleware layer that can perform certain basic data processing operations locally. The session highlights the synergies between both technologies and how they can be utilized. Topics covered include implementation and performance optimization.
    Speaker(s):      Carlos Lucasius, Oracle, Java Embedded Engineering
    Carlos Lucasius works in the Java Embedded and Real-Time Engineering product team at Oracle Corporation, where he is involved in development, testing, and technical support. Prior to joining Sun (now Oracle), he worked as a consultant to IT departments at various companies in both North America and Europe; specific application domains he was involved in include artificial intelligence, pattern recognition, advanced data processing, simulation, and optimization as applied to complex systems and processes such as intelligent instruments and industrial manufacturing. Carlos has presented frequently at scientific conferences, universities/colleges, and corporations across North America and Europe. He has also published a number of papers in refereed international journals covering applied scientific research in the above-mentioned areas.
    Event:      JavaOne and Oracle Develop
    Stream(s):      JAVAONE
    Track(s):      Java for Devices, Card, and TV
    Session Type:      Conference Session
    Session Category:      Case Study
    Duration:      60 min.
    Schedule:      Tuesday, September 21, 13:00 | Hilton San Francisco, Golden Gate 1
    Session ID:      S313952
    Title:      Developing Applications with Oracle Berkeley DB for Java and Java ME Smartphones
    Abstract:      Oracle Berkeley DB is a high-performance, embeddable database engine for developers of mission-critical systems. It runs directly in the application that uses it, so no separate server is required and no human administration is needed, and it provides developers with fast, reliable, local persistence with zero administration. The Java ME platform provides a new, rich user experience for cell phones comparable to the graphical user interfaces found on the iPhone, Google Android, and other next-generation cell phones. This session demonstrates how to use Oracle Berkeley DB and the Java ME platform to deliver rich database applications for today's cell phones.
    Speaker(s):      Hinkmond Wong, Oracle, Principal Member of Technical Staff
    Hinkmond Wong is a principal engineer with the Java Micro Edition (Java ME) group at Oracle. He was the specification lead for the Java Community Process (JCP) Java Specification Requests (JSRs) 36, 46, 218, and 219, Java ME Connected Device Configuration (CDC) and Foundation Profile. He holds a B.S.E. degree in Electrical Engineering from the University of Michigan (Ann Arbor) and an M.S.E. degree in Computer Engineering from Santa Clara University. Hinkmond's interests include performance tuning in Java ME and porting the Java ME platform to many types of embedded devices. His recent projects include investigating ports of Java ME to mobile devices such as Linux/ARM-based smartphones; he is the tech lead of the CDC and Foundation Profile libraries. He is the author of the book "Developing Jini Applications Using J2ME Technology".
    Event:      JavaOne and Oracle Develop
    Stream(s):      JAVAONE
    Track(s):      Java ME and Mobile, JavaFX and Rich User Experience
    Tags:      Add Application Development, Java ME, Java Mobile, JavaFX Mobile, Mobile Applications
    Session Type:      Conference Session
    Session Category:      Tips and Tricks
    Duration:      60 min.
    Schedule:      Monday, September 20, 11:30 | Hilton San Francisco, Golden Gate 3
    I think I have them all. If I have missed any, please reply and I can update the list, or just post the info in the reply.
    Thanks,
    Greg Rekounas

    Are there any links to access these seminars?

  • Oracle Spatial Performance with 10,000-20,000 users

    Does anyone have any experience where Oracle Spatial is used with, say, 20,000 concurrent users? I am not interested in MapViewer response time, but let's say there is:
    - an app using 800 different tables each having an sdo_geometry column
    - the app is configured with different tables visible on different view scales
    - let's say an average of 40-50 tables is visible at any given time
    - some tables will have only a few records, while other can hold millions.
    - there is no client side caching
    - clients can zoom in/out pan.
    Anwers I am interested in:
    - What sort of server would be required
    - How can Oracle serve all that data (each refresh renders the map and retrieves the data over the wire, as there is no client-side caching)?
    - What sort of network infrastructure would be required.
    - Can clients connect to different servers and hence use load balancing or does Oracle have an automatic mechanism for that?
    Thanks in advance,
    Patrick

    Patrick, et al.
    There are lots of things one can do to improve performance in mapping environments, because a lot of the visualisation is based on "background" or read-only data. Here are some tips:
    1. Spatially sort read-only data.
    This tip makes sure that data that are close to each other in space are next to each other on disk! Dan gave a good suggestion when he referenced Chapter 14, "Reorganize the Table Data to Minimize I/O", pp 580-582, Pro Oracle Spatial. But just as easily one can do a create table as select ... where sdo_filter(), where the filtering object is an optimized rectangle across the whole of the dataset. (This is quite quick on 10g and above but much slower on earlier releases.)
    When implementing this make sure that the created table is created such that its blocks are next to each other in the tablespace. (Consider tablespace defragmentation beforehand.) Also, if the data is READ ONLY set the PCTFREE to 0 in order to pack the data up into as small a number of blocks as possible.
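The spatial-sort-plus-PCTFREE idea above can be sketched as follows. This is a hypothetical example, not from the original post: the table ROADS, its geometry column GEOM, and the extent coordinates are all placeholders you would replace with your own.

```sql
-- Rewrite a read-only layer in spatial order by selecting through the RTree
-- index with an optimized rectangle covering the whole dataset extent.
-- PCTFREE 0 packs the read-only rows into as few blocks as possible.
CREATE TABLE roads_sorted PCTFREE 0 AS
SELECT r.*
  FROM roads r
 WHERE SDO_FILTER(
         r.geom,
         SDO_GEOMETRY(2003, NULL, NULL,
                      SDO_ELEM_INFO_ARRAY(1, 1003, 3),       -- optimized rectangle
                      SDO_ORDINATE_ARRAY(0, 0, 1000, 1000))  -- placeholder extent
       ) = 'TRUE';
```

After creating the sorted copy, rebuild the spatial index on it and point the visualisation at it instead of the original table.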
    2. Generalise data
    Rendering spatial data can be expensive where the data is geometrically detailed (many vertices) esp where the data is being visualised at smaller scales than it was captured at. So, if your "zoom thresholds" allow 1:10,000 data to be used at 1:100,000 then you are going to have problems. Consider pre-generalising the data (see sdo_util.simplify) before deployment. You can add multiple columns to your base table to hold this data. Be careful with polygon data because generalising polygons that share boundaries will create gaps etc as the data is more generalised. Often it is better to export the data to a GIS which can maintain the boundary relationships when generalising (say via topological relationships).
    Oracle's MapViewer has excellent on-the-fly generalisation but here one needs to be careful. Application tier caching (cf Bryan's comments) can help here a lot.
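Pre-generalisation with sdo_util.simplify can be sketched like this (the extra column name GEOM_100K and the 100-unit threshold are illustrative placeholders; pick a threshold appropriate to your target scale and coordinate system):

```sql
-- Add a pre-generalised copy of the geometry for small-scale display.
ALTER TABLE roads ADD (geom_100k SDO_GEOMETRY);

-- Simplify with a threshold in the units of the coordinate system.
UPDATE roads
   SET geom_100k = SDO_UTIL.SIMPLIFY(geom, 100);
COMMIT;
```

The rendering application then selects GEOM or GEOM_100K depending on the active zoom threshold.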
    3. Don't draw data that is sub-pixel.
    As one zooms out objects become smaller and smaller until they reach a point where the whole object can be drawn within a single pixel. If you have control over your map visualisation application you might want to consider setting the SDO_FILTER parameter "min_resolution" flag dynamically so that its value is the same as the number of meters / pixel (eg min_resolution=10). If this is set Oracle Spatial will only include spatial objects in the returned search set if one side of a geometry's MBR is greater than or equal to this value. Thus any geometries smaller than a pixel will not be returned. Very useful for large scale data being drawn at small scales and for which no selection (eg identify) is required. With Oracle MapViewer this behaviour can be set via the generalized_pixels parameter.
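A minimal sketch of the min_resolution flag, assuming roughly 10 meters per pixel (the figure and the :map_window bind variable are illustrative):

```sql
-- Only return geometries whose MBR has at least one side >= 10 units,
-- i.e. skip anything that would render smaller than a single pixel.
SELECT r.id, r.geom
  FROM roads r
 WHERE SDO_FILTER(r.geom, :map_window, 'min_resolution=10') = 'TRUE';
```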
    4. SDO_TOLERANCE, Clean Data
    If you are querying data other than via MBR (eg find all land parcels that touch each other) then make sure that your sdo_tolerance values are appropriate. I have seen sites where data captured to 1cm had an sdo_tolerance value set to a millionth of a meter!
    A corollary to this: make sure all your data passes validation at the chosen sdo_tolerance value before deploying to visualisation. Run sdo_geom.validate_geometry_with_context() / validate_layer_with_context().
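    For example (table, column and results-table names are illustrative), a whole layer can be validated into a results table:

    ```sql
    -- The results table must exist with this shape before the call.
    CREATE TABLE val_results (sdo_rowid ROWID, result VARCHAR2(2000));

    BEGIN
      SDO_GEOM.VALIDATE_LAYER_WITH_CONTEXT('ROADS', 'GEOM', 'VAL_RESULTS');
    END;
    /

    -- Rows beyond the summary row identify geometries that need fixing.
    SELECT * FROM val_results;
    ```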
    5. Rtree Spatial Indexing
    At 10g and above a lot of great work went into the RTree indexing. So, make sure you are using RTrees and not QuadTrees. Also, many GIS applications create sub-optimal RTrees by not using the additional parameters available at 10g and above.
    5.1 If your table/column sdo_geometry data contains only points, lines or polygons then let the RTree indexer know (via layer_gtype) as it can implement certain optimizations based on this knowledge.
    5.2 With 10g you can set the RTree index's data block usage via sdo_pct_free. Consider setting this parameter to 0 if the table/column sdo_geometry data is read only.
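    A sketch combining both parameters (index and table names are illustrative; the layer here is assumed to be point-only and read only):

    ```sql
    -- Tell the indexer the layer holds only points, and pack the
    -- index blocks fully since the data will not change.
    CREATE INDEX poles_geom_sidx ON poles (geom)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX
      PARAMETERS ('layer_gtype=POINT sdo_pct_free=0');
    ```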
    5.3 If a table/column is in high demand (eg it is the most commonly used table in all visualisations) you can consider loading (part of) the RTree index into memory. The sdo_non_leaf_tbl=true parameter splits the RTree index into its leaf component (which contains the actual rowid references) and its non-leaf component (the tree built on the leaves). Most RTrees are built without this, so only the MDRT*** secondary tables are created. But if sdo_non_leaf_tbl is set to true you will see the creation of an additional MDNT*** secondary table (for the non-leaf part of the index). If appropriate, the non-leaf table can then be kept in memory via the following:
    ALTER TABLE MDNT*** STORAGE (BUFFER_POOL KEEP);
    This is NOT a general panacea for all performance problems. One should investigate other options before embarking on this (cf Tom Kyte's books, such as Expert Oracle Database Architecture: 9i and 10g Programming Techniques and Solutions).
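    For completeness, a sketch of building such an index (the MDNT table name is generated by Oracle, so the name below is a placeholder; look up the actual name in your schema after the build):

    ```sql
    CREATE INDEX poles_geom_sidx ON poles (geom)
      INDEXTYPE IS MDSYS.SPATIAL_INDEX
      PARAMETERS ('sdo_non_leaf_tbl=TRUE');

    -- Placeholder name: substitute the actual MDNT_... table created
    -- for this index before running.
    ALTER TABLE "MDNT_1234$" STORAGE (BUFFER_POOL KEEP);
    ```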
    5.4 Don't forget to check your spatial index quality regularly. Because many sites use GIS package GUI tools to create tables, load data and index them, there is a real tendency not to check what they have done or to monitor the objects regularly. Check the SDO_RTREE_QUALITY column in USER_SDO_INDEX_METADATA and look for indexes where SDO_RTREE_QUALITY is > 2. If so, consider rebuilding or recreating the index.
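    A quick check along these lines (the index name in the rebuild is illustrative):

    ```sql
    -- Spatial indexes whose tree quality has degraded.
    SELECT sdo_index_name, sdo_rtree_quality
      FROM user_sdo_index_metadata
     WHERE sdo_rtree_quality > 2;

    -- Rebuild a degraded index.
    ALTER INDEX poles_geom_sidx REBUILD;
    ```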
    6. The rendering engine
    Whatever rendering engine you use, make sure you fully understand what it can and cannot do. AutoDesk's MapGuide is an excellent product but I have seen it simply cache table/column data and never dynamically access it. Also, I have been at one site which was running Deegree and MapViewer, and MapViewer was so fast in comparison to Deegree that I was called in to find out why. I discovered that Deegree was using SDO_RELATE(... ANYINTERACT ...) for all MBR queries while MapViewer was using SDO_FILTER. Just this difference was causing some queries to run at less than 10% of the speed of MapViewer!
    7. Consider "denormalising" data
    There is an old adage in databases: "normalise for edit, denormalise for performance". When we load spatial data we often get it from suppliers in a fairly flat or normalised form. In concert with spatial sorting, consider denormalising the data via aggregations based on a rendering attribute and some sort of spatial unit. For example, if you have 1 million points stored as single points in SDO_GEOMETRY.SDO_POINT which you want to render by a single attribute containing 20 values, consider aggregating the data using this attribute AND some sort of spatial BUCKET or BIN. So, consider using SDO_AGGR_UNION coupled with Spatial Analysis and Mining package functions to GROUP the data BY <<column_name>> and a set of spatial extents.
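    A sketch of that aggregation (the bin_id column is assumed to have been precomputed, eg with the SDO_SAM binning functions; table, column names and the 0.005 tolerance are illustrative):

    ```sql
    -- Collapse ~1M single points into one aggregate geometry per
    -- (attribute value, spatial bin) pair.
    CREATE TABLE points_binned AS
    SELECT p.category,
           p.bin_id,
           SDO_AGGR_UNION(SDOAGGRTYPE(p.geom, 0.005)) AS geom
      FROM points_raw p
     GROUP BY p.category, p.bin_id;
    ```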
    8. Tablespace use
    Talk to your DBA to find out how the Oracle database's physical and logical storage is organised. Is a SAN being used, or SAME-arranged disk arrays? Knowing this, you can organise your spatial data and indexes using more effective and efficient methods that will ensure greater scalability.
    9. Network fetch
    If your rendering engine (app server) and database are on separate machines you need to investigate what sort of fetch sizes are being used when returning data from queries to the middle tier. Fetch sizes for attribute-only rows and rows containing spatial data can be, and normally are, radically different. Accepting the default settings for these sizes could be killing you (as could the sort_area_size of the Oracle session the application server has created on the database). For example, I have been informed that MapInfo Pro uses a fixed value of 25 records per fetch when communicating with Oracle. I have done some testing to show that this value can be too small for certain types of spatial data. SQL Developer's GeoRaptor uses 100, which is generally better (and one can modify this). Most programmers accept defaults for network properties when programming in ADO/ODBC/OLEDB/JDBC: just be careful as to what is being set here. (This is one of the great strengths of ArcSDE: its TCP/IP network transport is well written, tuneable and very efficient.)
    10. Physical Format
    Finally, while Oracle's excellent MapViewer requires its spatial data to be in Oracle, other commercial rendering engines do not. So, consider using alternate physical file formats that are more optimal for your rendering engine. For example, Google Earth Enterprise "compiles" all the source data into an optimal format which the server then serves to Google Earth Enterprise clients. Similarly, a shapefile on disk local to the application server (with spatial indexing) may be faster than storing the data back in Oracle on a database server that is being shared with other business databases (eg Oracle Financials). If you don't like this approach and want to use Oracle only, consider using a dedicated Oracle XE on the application server for data that is read only and used in most of your generated maps, eg contour or drainage data.
    Just some things to think about.
    regards
    Simon

  • Customer Reference regarding Migration from COBOL to Oracle Developer

    Hi,
    Can anyone please share customer references regarding migration from COBOL to Oracle Developer.
    Appreciate if someone can help.
    Thanks,

    Can you please share some tools / methodology to perform such migrations, particularly from COBOL to Oracle Developer?

    BIG question. Actually, we did it the hard way: we completely rewrote the entire application. It took 4 years with 8 to 10 developers and analysts.
    The only tools we used were what you get from Oracle: Forms, SQL*Plus, Reports. It was a huge project, done partly to overcome the Y2K century problems.
    As for training, we did not really get any. We just waded in and started writing code... which definitely improved after about 6 months.
    What sort of system are you looking to convert? How much on-line user interface does it currently have? How many data files?
