Query Cache to Derive Summary Data - Advice?

Good Evening, I am faced with a problem and I was hoping to get a nudge in the right direction.
     I have a cache that stores the status of orders and the time each order spent in each status as it moved through our system. From that I need to derive some average times: specifically, the average time it took for an order to go from status "A" (endtime) to status "B" (starttime). So essentially I need to compute, for every order, the difference between fields of two different Status objects, and then average those differences. I know how to query the cache for my result set and then iterate over it and calculate the average, but I was wondering/hoping there is a more elegant solution.
     I have been reading through the user guide (http://wiki.tangosol.com/display/COH32UG/Provide+a+Data+Grid) and reading up on Aggregators and Agents, but there are no complete examples to look at, and I am unsure whether these approaches would help me reach my goal.
     Given my task, is there an approach I should focus on, and if so, is there documentation anywhere that covers it?
     Thanks in Advance!
     -- Grant
     Our data structure looks something like this:
     class ClientReport {
         public long lasttimeaccessed;
         public long clientid;
         // ...some other stuff...
         Map orders; // map of Order objects, keyed on orderid
     }

     class Order {
         public long orderID;
         // ...some other stuff...
         Map statuses; // map of status changes, keyed on status
     }

     class Status {
         public String status;
         public long starttime;
         public long endtime;
     }
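     For reference, a minimal sketch of the brute-force approach described above (pull the reports to the client, then iterate); the cache name "reports" is hypothetical and the raw-iterator style matches the Coherence 3.2-era code in this thread:

     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import java.util.Iterator;
     import java.util.Map;

     NamedCache cache = CacheFactory.getCache("reports"); // hypothetical cache name
     long sum = 0;
     long count = 0;
     // walk every report and every order, accumulating the A->B gaps
     for (Iterator it = cache.entrySet().iterator(); it.hasNext(); ) {
         Map.Entry e = (Map.Entry) it.next();
         ClientReport report = (ClientReport) e.getValue();
         for (Iterator oit = report.orders.values().iterator(); oit.hasNext(); ) {
             Order order = (Order) oit.next();
             Status a = (Status) order.statuses.get("A");
             Status b = (Status) order.statuses.get("B");
             if (a != null && b != null) {
                 sum += b.starttime - a.endtime;
                 count++;
             }
         }
     }
     double average = (count == 0) ? 0.0 : (double) sum / count;

     This works, but it deserializes and ships every report to the client, which is exactly what the aggregator approach in the reply below avoids.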

Hi Grant,
     First of all, you might want to create an index on the status changes so that you can pre-filter on them. For that you will need a custom ValueExtractor implementation, because the status is nested two maps deep; you then use that extractor to create the index.
     You can add the index for that ValueExtractor using the cache's addIndex(ValueExtractor, boolean, Comparator) method.
     The ValueExtractor would need to collect the status strings from each status change in each order, and return them as a collection.
     You can then use this index to filter for those report objects which have at least one status change ending up in the expected state.
     You could also create a similar ValueExtractor (let's name it ReportStatusObjectExtractor) which returns all Status objects from all orders in a report, and create an index with this ValueExtractor as well.
     Please be aware that the ValueExtractor implementations need to implement the equals() and hashCode() methods properly. In this case, because the extractors have no parameters, you can implement equals() so that it returns true for all objects of the same class, and hashCode() can simply return 1.
     This index could then be used to speed up evaluation of the aggregator because you would not need to deserialize the entire report instance.
     You can now create your own InvocableMap.EntryAggregator implementation, which should also implement InvocableMap.ParallelAwareAggregator.
     The getParallelAggregator() method should return an instance of a Serializable class that again implements EntryAggregator. In its aggregate method, that class should call the extract method on each entry, passing an instance of the second ValueExtractor mentioned above (which should be a constant), to pull the Status objects belonging to the report instance out of the index:
          public class ReportStatusStringExtractor implements ValueExtractor, Serializable {
            public static final ReportStatusStringExtractor INSTANCE = new ReportStatusStringExtractor();

            public Object extract(Object oTarget) {
              ClientReport report = (ClientReport) oTarget;
              // extract the status string of each status change from each order
              // and return all of them in a collection
              ...
            }

            public boolean equals(Object other) {
              return other != null && other.getClass().equals(this.getClass());
            }

            public int hashCode() {
              return 1;
            }
          }

          public class ReportStatusObjectExtractor implements ValueExtractor, Serializable {
            public static final ReportStatusObjectExtractor INSTANCE = new ReportStatusObjectExtractor();

            public Object extract(Object oTarget) {
              ClientReport report = (ClientReport) oTarget;
              // extract each Status object from each order
              // and return all of them in a collection
              ...
            }

            public boolean equals(Object other) {
              return other != null && other.getClass().equals(this.getClass());
            }

            public int hashCode() {
              return 1;
            }
          }

          public class OrderAverageAggregator implements InvocableMap.ParallelAwareAggregator {
            public static final OrderAverageAggregator INSTANCE = new OrderAverageAggregator();

            public InvocableMap.EntryAggregator getParallelAggregator() {
              return OrderAverageParallelAggregator.INSTANCE;
            }

            public Object aggregateResults(Collection collResults) {
              // join the partial results from all servers;
              // the object returned from this method will be returned from the cache.aggregate call
              ...
            }

            public Object aggregate(Set setEntries) {
              // the object returned from this method will be returned
              // from the cache.aggregate call if invoked on a replicated cache
              return aggregateResults(
                Collections.singletonList(
                  OrderAverageParallelAggregator.INSTANCE.aggregate(
                    setEntries)));
            }
          }

          public class OrderAverageParallelAggregator implements InvocableMap.EntryAggregator, Serializable {
            public static final OrderAverageParallelAggregator INSTANCE = new OrderAverageParallelAggregator();

            // aggregate method of the parallel aggregator
            public Object aggregate(Set setEntries) {
              Iterator iter = setEntries.iterator();
              while (iter.hasNext()) {
                InvocableMap.Entry entry = (InvocableMap.Entry) iter.next();
                Collection statusObjects = (Collection)
                  entry.extract(ReportStatusObjectExtractor.INSTANCE);
                // ... do averaging ...
              }
              // the objects returned from this method on each storage-enabled node will be passed
              // in a collection to the aggregateResults method of the OrderAverageAggregator class
              return (Serializable) result;
            }
          }

          // code doing the querying
          NamedCache cache = CacheFactory.getCache(...);
          Object result = cache.aggregate(
            new ContainsFilter(ReportStatusStringExtractor.INSTANCE, "expectedStatus"),
            OrderAverageAggregator.INSTANCE);

          // code adding the indexes; this needs to run only once after the cluster is started
          NamedCache cache = CacheFactory.getCache(...);
          cache.addIndex(ReportStatusStringExtractor.INSTANCE, false, null);
          cache.addIndex(ReportStatusObjectExtractor.INSTANCE, false, null);
     The class files for the two extractor classes and the OrderAverageParallelAggregator class must reside in the classpath of the storage-enabled cache JVMs.
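     To make the elided bodies above concrete, here is a hypothetical sketch (not part of the original reply; java.util imports assumed). It relies on the ClientReport/Order/Status shape from the question and on the fact that the sum of per-order (B.starttime - A.endtime) differences equals the difference of the sums, so each node can return a partial {sum, count} pair:

          // ReportStatusObjectExtractor.extract(): collect every Status from every Order
          public Object extract(Object oTarget) {
              ClientReport report = (ClientReport) oTarget;
              List statusObjects = new ArrayList();
              for (Iterator it = report.orders.values().iterator(); it.hasNext(); ) {
                  Order order = (Order) it.next();
                  statusObjects.addAll(order.statuses.values());
              }
              return statusObjects;
          }

          // OrderAverageParallelAggregator.aggregate(): accumulate a partial sum and count
          public Object aggregate(Set setEntries) {
              long sum = 0;
              long count = 0;
              for (Iterator iter = setEntries.iterator(); iter.hasNext(); ) {
                  InvocableMap.Entry entry = (InvocableMap.Entry) iter.next();
                  Collection statusObjects =
                      (Collection) entry.extract(ReportStatusObjectExtractor.INSTANCE);
                  for (Iterator it = statusObjects.iterator(); it.hasNext(); ) {
                      Status s = (Status) it.next();
                      if ("A".equals(s.status)) {
                          sum -= s.endtime;   // subtract every A end time
                      } else if ("B".equals(s.status)) {
                          sum += s.starttime; // add every B start time
                          count++;
                      }
                  }
              }
              return new long[] { sum, count };
          }

          // OrderAverageAggregator.aggregateResults(): combine the per-node partials
          public Object aggregateResults(Collection collResults) {
              long sum = 0;
              long count = 0;
              for (Iterator it = collResults.iterator(); it.hasNext(); ) {
                  long[] partial = (long[]) it.next();
                  sum += partial[0];
                  count += partial[1];
              }
              return (count == 0) ? null : new Double((double) sum / count);
          }

     This sketch assumes every order that reached status "B" also has a status "A" entry; if that does not hold in your data, pair the statuses per order instead of summing the flat collection.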
     I hope this helps,
     Robert

Similar Messages

  • SAP BW 3.5 Query Cache - no data for queries

    Dear experts,
    we have a problem with the SAP BW Query Cache (BW 3.5). Sometimes, after queries have been pre-calculated using web templates, no figures are available when running a web report or when doing a drilldown or filter navigation. Deleting the cache for the affected query solves the issue, but that workaround is not reasonable or practicable. The problem occurs non-reproducibly for different queries.
    Is this a "normal" error of SAP BW that we have to live with, or are there solutions for it? Any hints are greatly appreciated.
    Thanks in advance & kind regards,
    daniel

    Hi Daniel,
    Try to work without the cache for those queries.
    In any case, you should check how the cache option is configured for those queries.
    You can see that in transaction RSRV.
    Hope this helps

  • Query Governor - Summary Data - Only when not out of date?

    In Oracle Discoverer Desktop edition the user can choose how Summary Data are used:
    1. Always, when available
    2. Only when summary data are not out of date (stale)
    3. Never
    What exactly is the meaning of "Only when summary data are not out of date"?
    In previous versions of Discoverer you had the opportunity to specify how old (in days) summary tables were acceptable to you.
    (By the way: In the Danish translation of Discoverer the word "not" has been omitted, so you have the opportunity to use summary data only when they are out of date :-)
    Best regards
    Torben


  • Query Not reflected with Updated Data

    Dear Experts,
    I am facing a problem with query data updates. Data is loaded into the InfoProvider successfully every day, but when a user runs the query through BEx, he is always shown old data. Then I go to RSRT and generate the query, and the data gets updated.
    Every time new data is loaded, I need to generate the query again.
    What could be the reason for this? Is it related to the query cache?
    Any advice?
    Thanks in Advance.

    Dear Michael,
    This problem occurs only for one MultiProvider. Running this program would affect all other queries as well, which would delay the reporting.
    Is there any other reason why the query shows old data even though the InfoProvider is loaded with new data?

  • BW3.5 - Query Cache - InfoObject decimal formatting

    Hello all,
    I built a BW query which displays key figures. Each key figure uses the decimal-place formatting from the key figure InfoObject (in Query Designer, the Decimal Places property for each key figure is set to "[From Key Figure 0.00]").
    I decided to change the key figure InfoObject to have 0 decimal places (in the BEx formatting tab for the key figure). Now, when I open Query Designer and look at the properties for the key figure, it is still set to "[From Key Figure 0.00]" (it should be "[From Key Figure 0]" to reflect the key figure InfoObject change).
    I tried generating the report using RSRT and deleting the query cache, but it still shows up with two decimal places. Has anyone encountered this problem before? I am trying to avoid removing the key figure InfoObject from the query and re-adding it to reflect the change.
    Thanks!

    Hello Brendon
    You have changed the key figure InfoObject to show 0 decimals (no decimals). That is fine, but in the query the key figure property is still set to 2 decimals, so the data is displayed with 2 decimals; the query setting is local and has priority over the key figure InfoObject setting.
    If you look at the key figure properties in the query, there is an option ("From Key Figure") that means "use whatever the key figure InfoObject defines". Select that, and from then on you will get exactly as many decimals as you have defined in the key figure InfoObject.
    Thanks
    Tripple k

  • How to add more summary data in the MIP of Compensation workbench

    Hi all,
    In the MIP (incentive plan) of Compensation Workbench, we need to add more summary data to the summary section. I checked that all the summary data in that section is stored in the table 'ben_cwb_summary'.
    Even though we can change the VIEW object for that page (using an extension), we can't retrieve the new data, because the additional summary information we need is not stored in 'ben_cwb_summary'; we need to write our own query to summarize the data.
    How can we implement this?
    Thanks in advance!
    Jane


  • Issues with Query Caching in MII

    Hi All,
    I am facing a strange problem with query caching in an MII query. I created one Xacute query and set its cache duration to 30 seconds. The BLS associated with the query retrieves data from an SAP system, and a web page populates the value by executing an iCommand. These are the steps I followed:
    The query is executed for the first time and correctly retrieves the data from SAP. Let's say the value is val1.
    At the 10th second, the value in SAP changes from val1 to val2.
    The query is executed at the 15th second and returns val1, which is expected, since it comes from the cache.
    The query is executed at the 35th second and returns val2, retrieved from the SAP system. That is correct.
    The query is executed at the 40th second and returns val1 from the cache, which is not expected.
    I have tried clearing the Java cache in the browser and the JCo cache on the server.
    I have seen the same problem with a tag query as well.
    MII Version - 12.0.6 Build(12)
    Any thoughts on this?
    Thanks,
    Soumen

    Soumen Mondal,
    If you are seeing this from the same client PC in the same session, it is very strange; but if there are two sessions running on the same PC, this kind of issue can occur.
    Something about caching:
    To decide whether to cache a query or not, consider how many times you intend to run the query per minute. For data that changes in seconds but is queried in minutes, I would recommend not caching the query at all.
    A typical example of useful query caching is a query on the material master cached for 24 hours, where we know that after creating a material master record we will not create a PO for it on the same day.
    BR,
    SB

  • Continuous Query Caching - Expensive?

    Hello,
    I have had a look at the documentation, but I still cannot find a reasonable answer to the following question: how expensive are continuous query caches?
    Is it appropriate to have many of them?
    Is the following example an acceptable usage of continuous query caching (does it scale)?
    In the context of a web application:
    User logs onto a website
    User performs a "Search" for financial instruments
    A continuous query cache is created with a filter for those instruments returned (say, 50) to listen to price updates.
    If the user pages, or does another search, the query cache is released and a new one, with an updated filter, is created.
    Does it make a difference if we are using the extend client?
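    A minimal sketch of the pattern described above, assuming Coherence's standard ContinuousQueryCache constructor and an InFilter over a hypothetical getSymbol accessor; the cache and attribute names are illustrative:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.ContinuousQueryCache;
    import com.tangosol.util.filter.InFilter;
    import java.util.HashSet;
    import java.util.Set;

    // inside the search handler: build a filter for the ~50 instruments returned
    Set symbols = new HashSet();
    symbols.add("ORCL");
    symbols.add("IBM");
    // ... the rest of the search results ...

    NamedCache prices = CacheFactory.getCache("prices");
    ContinuousQueryCache cqc =
        new ContinuousQueryCache(prices, new InFilter("getSymbol", symbols), true);
    // the CQC now keeps itself in sync with price updates for those instruments

    // when the user pages or issues a new search, release this CQC
    // and create a new one with an updated filter
    cqc.release();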

    Hi,
    So 100 CQCs is probably not too excessive depending on the configuration of the process instantiating the CQCs and the cluster size etc.
    Each CQC will hold its own set of deserialized keys and values, so yes they are distinct objects, although a CQC of 50 entries would not be very big.
    One query I have - you mention that this is a Web Application, but you also mention an Extend client. Is your Web App an Extend client of the main cluster? Is there a reason why you did this? Most people would make a Web App a storage-disabled cluster member so it would perform a bit better. Provided the Web App sits on a server that is very close in network terms to the cluster (i.e. same switch), I would make it part of the cluster - or is the Web App the thing that is in the "regional environment"?
    If you are running CQCs over Extend then there used to be some issues with this if the Extend connection was lost. AFAIK this is supposed to be fixed in later patches so I would get 3.7.1.8 and make sure you test that the Web App continues to work and properly fails over if you kill its Extend connection. When the CQC fails over it will reinitialize all its data so you will need to cope with that if you are pushing changes based on the CQC.
    JK

  • When is query cache deleted?

    Hello All,
    I searched this forum and SAP for notes but couldn't find exactly when query cache is deleted.
    For example, we know what happens when new transaction data is loaded to an InfoCube, but what about when new master data is loaded? If a query still has the existing transaction data loaded, but new master data is loaded (and the query output is reading an attribute), should it NOT read the cache?
    What are the hard and fast rules?
    Thank you so much!

    Queries check whether new data has been loaded to the cube/DSO since the data was cached. If new data has been loaded, the old cached data for that query is deleted and the next execution of the query reads the data from the database. A master data load does not affect the last load date on a cube/DSO, so it does not cause the cache to be deleted for a query on the cube/DSO. If you had set the master data InfoObject to be an InfoProvider and had a pure master data query (no cube/DSO), the cached master data query results should get deleted.
    As a matter of practice, master data should be loaded before the loads to the cubes/DSOs are done.
    As mentioned, you can delete the OLAP cache from transaction RSRCACHE. There is also a batch program you can run to delete entries from the OLAP cache; you can specify particular query results to be deleted, and you can also use it to delete all cached results older than a specified number of days.

  • Continuous Query Cache Local caching meaning

    Hi,
    I encountered the following problem when working with a continuous query cache with local caching set to TRUE.
    I was able to insert data into the Coherence cache and read it as well.
    Then I stopped the process and tried to read the data in the cache for the keys I had inserted earlier,
    but I received NULL as the result.
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), true);
    DerivedCQC.hpp:

    /*
     * File: DerivedCQC.hpp
     * Author: srathna1
     * Created on 15 July 2011, 02:47
     */
    #ifndef DERIVEDCQC_HPP
    #define DERIVEDCQC_HPP

    #include "coherence/lang.ns"
    #include "coherence/net/cache/ContinuousQueryCache.hpp"
    #include "coherence/net/NamedCache.hpp"
    #include "coherence/util/Filter.hpp"
    #include "coherence/util/MapListener.hpp"

    using namespace coherence::lang;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::net::NamedCache;
    using coherence::util::Filter;
    using coherence::util::MapListener;

    class DerivedCQC
        : public class_spec<DerivedCQC, extends<ContinuousQueryCache> >
    {
        friend class factory<DerivedCQC>;

    protected:
        DerivedCQC(NamedCache::Handle hCache,
                   Filter::View vFilter,
                   bool fCacheValues = false,
                   MapListener::Handle hListener = NULL)
            : super(hCache, vFilter, fCacheValues, hListener) {}

    public:
        virtual bool containsKey(Object::View vKey) const
        {
            return m_hMapLocal->containsKey(vKey);
        }
    };

    #endif /* DERIVEDCQC_HPP */
    When I switch the local-storage flag to FALSE, I am able to read the data:
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("dist-AccountGCE"), AlwaysFilter::getInstance(), false);
    Ideally, I expect that in the TRUE scenario, while I am connected to Coherence, all keys and values are synced and stored locally, and each update is synced as well; in the FALSE scenario it hooks into the Coherence cache, reads each key from there, and caches values from that moment onwards. Please share how this is implemented underneath.
    Thanks and regards,
    Sura

    Hi Wei,
    I found the issue: if you declare the cache as a global variable, you won't get data in the TRUE scenario, but if you declare the cache inside a method, you do retrieve data.
    Try this:
    #include <iostream>
    #include <coherence/net/CacheFactory.hpp>
    #include "coherence/lang.ns"
    #include <coherence/net/NamedCache.hpp>
    #include <stdio.h>
    #include <stdlib.h>
    #include <pthread.h>
    #include <coherence/net/cache/ContinuousQueryCache.hpp>
    #include <coherence/util/filter/AlwaysFilter.hpp>
    #include <coherence/util/filter/EntryFilter.hpp>
    #include "DerivedCQC.hpp"
    #include <fstream>
    #include <string>
    #include <sstream>
    #include <coherence/util/Set.hpp>
    #include <coherence/util/Iterator.hpp>
    #include <sys/types.h>
    #include <unistd.h>
    #include <coherence/stl/boxing_map.hpp>
    #include "EventPrinter.hpp"
    using namespace coherence::lang;
    using coherence::net::CacheFactory;
    using coherence::net::NamedCache;
    using coherence::net::ConcurrentMap;
    using coherence::net::cache::ContinuousQueryCache;
    using coherence::util::filter::AlwaysFilter;
    using coherence::util::filter::EntryFilter;
    using coherence::util::Set;
    using coherence::util::Iterator;
    using coherence::stl::boxing_map;
    NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);

    int main(int argc, char** argv) {
        std::cout << "size: " << hCache->size() << std::endl;
        return 0;
    }
    In the example above you will see that size is 0 in the TRUE case, while in the FALSE case it equals the amount of data in the cache.
    But if you declare the cache as below, you get the results the documentation leads you to expect:
    int main(int argc, char** argv) {
        NamedCache::Handle hCache = DerivedCQC::create(CacheFactory::getCache("gce-Account"), AlwaysFilter::getInstance(), true);
        std::cout << "size: " << hCache->size() << std::endl;
        return 0;
    }
    Is this a bug, or is this the expected behaviour? From my understanding, this is a bug.
    Thanks and regards,
    Sura

  • Deriving system date

    Hi,
    I have to derive age; that is, a (current_date - birth_date) calculation must be done. How can I derive the current date in ODI, and how do I calculate the age from it?
    thanks

    If your target is Oracle, then in the interface you can directly map current_date or sysdate to the staging area so that ODI populates the column while loading into I$.
    If you plan to fetch it into a variable, you can write the query SELECT CURRENT_DATE FROM DUAL against any Oracle schema and call the variable in refresh mode in the package.
    There is also an API: <%=odiRef.getSysDate("yyyy/mm/dd")%>

  • Query Cache in WLP 10.3

    Hi,
    Has anyone used a query cache in WLP applications without using any ORM tool?
    What options do I have for caching my frequently used queries?
    Thanks,
    CA

    If I understood you correctly, you want to do some sort of data caching in the portal controller rather than in the DAO layer. You can use WLP caches in the controller via CacheFactory (http://e-docs.bea.com/wlp/docs81/javadoc/com/bea/p13n/cache/CacheFactory.html). You can also cache content from JSPs using custom tags (http://e-docs.bea.com/wls/docs81/jsp/customtags.html#56944).
    Edited by: user10704069 on Dec 22, 2008 10:56 AM
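    As a language-neutral illustration of the pattern (not the WLP p13n API itself), a minimal time-bounded query cache might look like the sketch below; the class name and the 30-second TTL are hypothetical:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal illustrative query cache: results expire after a fixed TTL.
    // WLP's p13n CacheFactory offers similar get/put semantics with
    // configurable TTL and max size, managed via the portal admin tools.
    public class SimpleQueryCache {
        private static final long TTL_MILLIS = 30000L; // hypothetical 30s TTL

        private static final class Entry {
            final Object value;
            final long loadedAt;
            Entry(Object value, long loadedAt) {
                this.value = value;
                this.loadedAt = loadedAt;
            }
        }

        private final Map<String, Entry> cache = new ConcurrentHashMap<String, Entry>();

        // Returns the cached result for the query key, or null if absent or stale;
        // on null, the caller re-runs the query and calls put().
        public Object get(String queryKey) {
            Entry e = cache.get(queryKey);
            if (e == null || System.currentTimeMillis() - e.loadedAt > TTL_MILLIS) {
                cache.remove(queryKey);
                return null;
            }
            return e.value;
        }

        public void put(String queryKey, Object result) {
            cache.put(queryKey, new Entry(result, System.currentTimeMillis()));
        }
    }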

  • Reducing Database Call Techniques...query caching the only way?

    What's the most efficient way to reuse data that gets called on practically every page of an application?
    For instance, I have a module that handles all my gateways, sub-pages, sub-gateways, etc. This only changes when a change is made to the page structure in the admin portion of the application, so it's really not necessary to hit the database every time a page loads. Is this a good case for query caching? What are the pros, cons, and alternatives? I thought an alternative might be to store it in a session, but that doesn't sound ideal.
    Thanks!
    Paul

    What's the most efficient way to reuse data that gets called on practically every page of an application?
    That sounds like a question from the certification exam. The answer is to store the data in session or application scope, depending on the circumstances. If the data depends on the user, then the answer is session. If the data persists from user to user, then it is application.
    admin portion of the application.
    Suggests users must log in. Otherwise you cannot distinguish admin from non-admin.
    This will only change whenever a change is made to the page structure in the admin portion of the application.
    Then I would go for storing the data in application scope, as the admin determines the value for everybody else. However, the session scope also has something to do with it. Since the changes are only going to occur in the admin portion, I would base everything on a variable, session.role.
    You cache the query by storing it directly in application scope within onApplicationStart in Application.cfc, like this:
    <cfquery name="application.myQueryName">
    </cfquery>
    The best place for the following code is within onSessionStart in Application.cfc.
    <!--- It is assumed here that login has already occurred. Your code checks whether
    session.role is Admin. If so, make the changes. --->
    <cfif session.role is 'admin'>
    <!--- Make changes to the data in application.myQueryName, otherwise do nothing --->
    </cfif>
    Added edit: On second thought, the best place for setting the application variable is in onApplicationStart.

  • Query cache,query monitor

    Hi
    What is the purpose of the query monitor and the query cache? Can you please explain?
    Points assured.
    Regards
    Rekha

    As the name indicates, the query monitor is for monitoring the runtime performance of BW queries; it is one of the tools in BW to monitor query performance. The transaction to run the query monitor is RSRT.
    In RSRT, you can execute queries in various modes, and to some extent you can force a query to be executed along a certain path; for example, you can simulate the execution of a query without using an aggregate, without using the cache, etc.
    In the monitor you can also view how the query is executed and diagnose the possible causes of a query running slow.
    Caching stores the query results in the memory of the BW system's application server. If you cache a query, the runtime performance will improve considerably, because the result set is stored in memory, and each time the query is run the OLAP engine does not have to read the database to fetch the records.
    Query caching has some limitations: if the query result changes, the cache will not help, because the new result set has to be read from the database again and presented.
    You can get more on this at help.sap.com
    Ravi Thothadri

  • Query to return next 7 dates

    Hello,
    Is there a way to return the next 7 dates using just a query? For example, I need a query that returns:
    select (I don't know what to put here) from dual
    Date
    2012-10-05
    2012-10-06
    2012-10-07
    2012-10-08
    2012-10-09
    2012-10-10
    2012-10-11
    If possible, I would also like to know if there's a way to pass a date and, based on it, have the query return the 7 dates following the passed date. For example:
    select (I don't know what to put here) from dual where date > '2012-10-15'
    Date
    2012-10-16
    2012-10-17
    2012-10-18
    2012-10-19
    2012-10-20
    2012-10-21
    2012-10-22
    I really appreciate any help
    Thanks

    Sven W. wrote:
    > I don't like connect by.
    That is fair enough; it is just your opinion.
    > It is slow and shouldn't be used for real production code.
    This, however, is absolute garbage.
    Changing the query to return 10,000 dates takes a little over 1s
    SQL> select date '2012-10-15' + level - 1 from dual
      2  connect by level <= 10000;
    <snip>
    28-FEB-40
    29-FEB-40
    01-MAR-40
    10000 rows selected.
    Elapsed: 00:00:01.26
    > In your case you can simply do this
    > with inputdata as (select to_date('2012-10-15','yyyy-mm-dd') startday from dual)
    > select startday+1 from inputdata union all
    > select startday+2 from inputdata union all
    > select startday+3 from inputdata union all
    > select startday+4 from inputdata union all
    > select startday+5 from inputdata union all
    > select startday+6 from inputdata union all
    > select startday+7 from inputdata ;
    Running your alternative for 10,000 dates took quite some time to create, needed to be put in a file to execute, and has been running now for about 15 minutes:
    select date '2012-10-15' + 1 from dual union all
    select date '2012-10-15' + 2 from dual union all
    <snip>
    select date '2012-10-15' + 9996 from dual union all
    select date '2012-10-15' + 9997 from dual union all
    select date '2012-10-15' + 9998 from dual union all
    select date '2012-10-15' + 9999 from dual union all
    select date '2012-10-15' + 10000 from dual;
    It is much more code, takes more time to write, is proven to be incredibly slow, and shouldn't be used for real production code.
    Edited by: 3360 on Oct 5, 2012 9:52 AM
    Sorry, it took only 12 minutes; it seemed a lot longer while waiting for it.
    29-FEB-40
    01-MAR-40
    01-MAR-40
    02-MAR-40
    10000 rows selected.
    Elapsed: 00:12:01.35

Maybe you are looking for

  • GetTypeInfo

    We are doing some work around database independence, and as part of this we use meta-data via JDBC to support this. Initially however, as part of our investigation to see what types are supported in Oracle, we used the java.sql.DatabaseMetaData.getTy

  • Error opening Georaptor on SQL Developer 1.5

    Hello Everyone, I was just wondering if anyone has been able to use the Georaptor extension (http://georaptor.sourceforge.net) with SQL Dev 1.5? The geoRaptor menus come up, but so far I have not been able to add any "Add Layer" to the Spatial view.

  • Collaborator on Document Level vs. Class Level Access

    Hi Guys, I am wondering what happens when you add a user as a collaborator / reviewer to a document that has not been given authorization on class level, ie. the user has no access to auctions but I am adding him/ her to an auction as a collaborator?

  • Spectacular Spectral Crash - Not Happy

    I was happily basking in the satisfying process of cleaning up some audio tracks using spectral view and the marquee tool and extolling the virtue of this wonderous facility - truly a miracle tool - when it happened! I selected an area of interest wi

  • Auto-recovery mechanism(S) of oracle database???

    Hi, I read that if the database is closed abnormally, say to power failure, then once the database is re-opened, the RECO foreground process will recover all transactions that were in-doubt, that is neither commited or rolled back, how does it do???