Schema refresh.

I have a database server that supports a number of separate applications, each application having its own schema and tablespace(s). To refresh the test database from live (or dev from test), we currently use transportable tablespaces to copy the data and a Data Pump export/import to take care of objects held in the data dictionary (e.g. packages/procedures).
Recently we hit a problem: the developers have started to use Oracle Text, and when retesting our refresh method we found that the users' stop lists do not get recreated in the database being refreshed. This appears to be true of any similar information held in the CTXSYS schema. As current usage of this feature is quite limited, this isn't a problem for the moment, since the stop lists can be recreated manually. What I would like to know is: has anyone come across similar features where a Data Pump export/import will not transfer the objects/information?
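For reference, a custom stop list can be rebuilt by hand with the CTX_DDL package; a minimal sketch, assuming a stoplist named APP_STOPLIST and placeholder stopwords:

    BEGIN
      -- recreate the stoplist lost in the refresh
      ctx_ddl.create_stoplist('APP_STOPLIST', 'BASIC_STOPLIST');
      -- re-add each custom stopword
      ctx_ddl.add_stopword('APP_STOPLIST', 'the');
      ctx_ddl.add_stopword('APP_STOPLIST', 'and');
    END;
    /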
Chris

Hi,
The main purpose of the initial load is to synchronize the source and target data.
Before executing an initial load, disable DDL extraction and replication. DDL processing is controlled by the DDL parameter in the Extract and Replicat parameter files.
Please refer to the link below; the prerequisites are clearly described there:
http://docs.oracle.com/goldengate/1212/gg-winux/GWUAD/wu_initsync.htm
In particular, see subtopic 16.1.2, Prerequisites for Initial Load.
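As an illustration only, an Extract parameter file with DDL capture suppressed for the duration of the load might look like this (the group name, credential alias, trail and table names are all placeholders):

EXTRACT ext1
USERIDALIAS gguser
-- suppress DDL replication while the initial load runs
DDL EXCLUDE ALL
EXTTRAIL ./dirdat/et
TABLE hr.*;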
Regards,
Veera

Similar Messages

  • Schema refresh issue

    Hi Everyone,
    can anybody do a correction? I am using the following query and there's some error in it. After executing a schema refresh using export & import, a comparison of database object counts needs to be done:
    -- SELECT 'TRUNCATE TABLE '||OWNER||'.'||TABLE_NAME||' ;' FROM DBA_TABLES WHERE OWNER='PRICING' order by  TABLE_NAME;
    SQL> SELECT 'select count(*) from  '||OWNER||'.'||TABLE_NAME||' ;' FROM DBA_TABLES WHERE OWNER='PRICING' order by  TABLE_NAME;
    The expected output was to display each table name in the schema followed by its corresponding number of records, but it wasn't showing correctly.

    Hi,
    What error do you get? Can you please share the error log here.
    This is a dynamic query, so it will generate SQL statements like:
    select count(*) from  SCOTT.BONUS ;
    select count(*) from  SCOTT.DEPT ;
    select count(*) from  SCOTT.EMP ;
    You can try something like this:
    set heading off feedback off pagesize 0
    spool count.sql
    SELECT 'select count(*) from  '||OWNER||'.'||TABLE_NAME||' ;' FROM DBA_TABLES WHERE OWNER='PRICING' order by TABLE_NAME;
    spool off
    @count.sql
    Hope this helps.

  • Schema refresh in the test DB

    Dear All/Aman,
    I have one database with two schemas; every day I take a schema-level export and import it into my test server.
    The import takes a long time every day even though there are only small changes in the schemas.
    Can I use INCTYPE for an incremental import to reduce the time?
    The intention is to refresh the test schemas every day.
    Could you please provide some suggestions to minimize the schema refresh time?
    Is incremental import available in 10g at schema level, or do I need to go for RMAN?
    Please provide your input at the earliest.

    The script which I posted is just an example which I thought of going through.
    The setup is like this: every day two schemas are exported from the production DB and imported into the test DB.
    The test DB is UAT, where users need to do DML operations on it; however, the business requirement is that these two schemas get refreshed every day at a certain time.
    So my requirement is: whatever data the production DB populates for a particular table in the schema during the day should be reflected in test after the import, no matter what the users did in the test DB.
    But I don't want to import the full schemas every time, as it takes too much time; I'm looking for something like an incremental import, where a table is imported only if changes have happened in it and skipped otherwise.
    I hope you can understand my requirement.
    Let me know if you need any further details to help me.
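    For illustration, a plain daily Data Pump refresh of this shape (the schema, directory and file names are placeholders) would overwrite whatever the UAT users changed, table by table:
    expdp system SCHEMAS=SCHEMA1,SCHEMA2 DIRECTORY=DP_DIR DUMPFILE=daily.dmp LOGFILE=daily_exp.log
    impdp system SCHEMAS=SCHEMA1,SCHEMA2 DIRECTORY=DP_DIR DUMPFILE=daily.dmp TABLE_EXISTS_ACTION=REPLACE LOGFILE=daily_imp.log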
    Regards,

  • Schema Refresh using exp/imp.

    Hello All,
    I want to perform a schema refresh of the SAMPLE user from the production to the testing environment using export/import.
    Could you please tell me the command to perform it?
    Also, could anyone please tell me whether the same user (SAMPLE) in the test environment gets dropped before the import is done?
    Can I perform the exp/imp using sys/system or the user SAMPLE?

    tvenkatesh07 wrote:
    Hello All,
    I want to perform a schema refresh of the SAMPLE user from the production to the testing environment using export/import.
    Could you please tell me the command to perform it?
    Also, could anyone please tell me whether the same user (SAMPLE) in the test environment gets dropped before the import is done?
    Can I perform the exp/imp using sys/system or the user SAMPLE?
    If you're running 10g, then use Data Pump and read the documentation:
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#i1007466
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_import.htm#i1007653
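    For example, a schema-mode export/import of SAMPLE might look like this (a sketch only; the directory object and file names are placeholders, and the dump file must be in a directory visible to the target database):
    expdp system SCHEMAS=SAMPLE DIRECTORY=DATA_PUMP_DIR DUMPFILE=sample.dmp LOGFILE=sample_exp.log
    impdp system SCHEMAS=SAMPLE DIRECTORY=DATA_PUMP_DIR DUMPFILE=sample.dmp TABLE_EXISTS_ACTION=REPLACE LOGFILE=sample_imp.log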
    My Oracle Video Tutorials - http://kamranagayev.wordpress.com/oracle-video-tutorials/

  • Schema Refresh activity

    Hi Gurus,
    I am doing a schema refresh from a production DB to a test DB.
    In PRODUCTION, I have a schema called LMS whose default tablespace is LMS and temporary tablespace is TEMP.
    In TEST, the schema is LMSTEST and its default and temporary tablespaces are LMSTEST and TEMP respectively.
    But there is also a tablespace called LMS in the TEST database.
    I have dropped the user from TEST,
    created a user with the same name,
    given it LMSTEST and TEMP as default and temporary tablespaces,
    given default roles to that user,
    and given system privileges as well.
    My problem is that while importing, my data is getting inserted into LMS, whereas it should go into LMSTEST as I made that the default tablespace. Why has this happened?
    I have also set quota zero on LMS in the TEST database, but the data is still getting inserted into the LMS tablespace and not into LMSTEST.

    With the export/import utility, imported data goes into the same tablespace it occupied in the source database; in your case, the LMS tablespace. So follow one of the options below:
    1. Either assign LMS as the default tablespace of the LMSTEST user,
    or
    2. after the import is over, move all tables to the LMSTEST tablespace and rebuild the indexes there,
    or
    3. take the export using Data Pump and use the REMAP_TABLESPACE option on import.
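    For option 3, the import might look like this (a sketch; the directory, dump file name and the REMAP_SCHEMA mapping are assumptions based on the names above):
    impdp system DIRECTORY=DATA_PUMP_DIR DUMPFILE=lms.dmp REMAP_SCHEMA=LMS:LMSTEST REMAP_TABLESPACE=LMS:LMSTEST LOGFILE=lms_imp.log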

  • How to Enhance or Speedup Datapump Schema Refresh

    Hi All,
    My environment is Oracle 10g Release 2 on RAC (Linux 5) with central storage. We are doing a Data Pump schema refresh (export) with the PARALLEL parameter, excluding only grants. Is there any possibility of speeding up the import by importing metadata and data separately? We tried the PARALLEL option, but the import still takes much longer to complete. Does it have anything to do with breaking the import into two parts? I don't know how to break up the import task.
    Kindly reply with appropriate examples. I would appreciate your response.
    Thanks,
    Abdul Mannan

    Hi Hitgon,
    Please find the following details:
    How many physical CPUs/cores? - 16 CPUs
    Is the OS 64-bit or 32-bit? - 64-bit (64-bit grid servers)
    How much physical memory in the OS? - 12 servers with 256 GB and 4 servers with 64 GB
    How much memory is assigned to the DB? - SGA 12 GB and PGA 6 GB
    What is the size of the database? - more than 1 TB
    What is the size of the schema you are going to import? - around 700-950 GB
    Is the import running from a remote client or on the server? - we are initiating the job on the server using PuTTY.
    We are using the following export and import parameter files:
    Datapump Export parfile:
    JOB_NAME=Schema1_20120621_100122_27198
    DIRECTORY=DPUMP_Export
    PARALLEL=4
    DUMPFILE=Schema1_20120621_100122_27198_%U.dmp
    LOGFILE=DPUMP_EXPORT:DPEXPORT_20120621_100122_27198.log
    CONTENT=ALL
    FILESIZE=10000M
    Datapump Import parfile:
    SCHEMAS=Schema1
    JOB_NAME=Schema1_20120118_213551
    DIRECTORY=DPUMP_EXPORT
    PARALLEL=4
    DUMPFILE=SCHEMA1_20120114_025007_19273_%U.dmp
    LOGFILE=DPUMP_EXPORT:DPIMP_20120118_213551.log
    REMAP_TABLESPACE=Schema1DATA:Schema2DATA
    EXCLUDE=GRANT
    EXCLUDE=STATISTICS
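    To break the import into two passes, as asked above, one common approach is to run the same import twice with different CONTENT settings, keeping the other parameters as in the parfile shown (each pass needs its own JOB_NAME; this is a sketch, not a tested recipe):
    # first pass: create the objects only
    CONTENT=METADATA_ONLY
    # second pass, as a separate job: load the rows, honouring PARALLEL
    CONTENT=DATA_ONLY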
    Would appreciate your reply..
    Thanks,
    Abdul Mannan

  • Query to find when the given schema last refreshed ?

    I need to find out when a given database schema was last refreshed using impdp, imp, or SQL*Loader.

    If the schema refresh is carried out by dropping and recreating the schema, then the CREATED column in DBA_USERS will give your answer.
    If it is carried out by dropping the tables instead, you can check CREATED and LAST_DDL_TIME in DBA_OBJECTS:
    SELECT username, created FROM dba_users WHERE username='U1';
    SELECT OWNER, OBJECT_NAME, CREATED, LAST_DDL_TIME FROM DBA_OBJECTS WHERE OWNER='U1';
    Please keep the forum clean by marking your post as answered or helpful once your question is answered.
    Thanks & Regards,
    SID
    (StepIntoOracleDBA)
    Email : [email protected]
    http://stepintooracledba.blogspot.in/
    http://www.stepintooracledba.com/

  • Trying to refresh schema for BHOLD management agent. Getting error

    Greetings,
    I'm piloting the BHOLD suite to see if it will meet our company's needs.
    I added some attributes in the BHOLD core configuration pages (web site).
    Now, in the FIM sync service, I need to refresh the schema to see them. The schema refresh fails with the following error logged in the event log. Any ideas?
    -Doug
    Log Name:      Application
    Source:        FIMSynchronizationService
    Date:          1/15/2014 8:42:27 AM
    Event ID:      6801
    Task Category: Server
    Level:         Error
    Keywords:      Classic
    User:          N/A
    Computer:      xxxxx
    Description:
    The extensible extension returned an unsupported error.
     The stack trace is:
     "System.ObjectDisposedException: Cannot access a disposed object.
    Object name: 'WindowsIdentityImpersonationFactory'.
       at Microsoft.AccessManagement.BHOLDConnector.Context.WindowsIdentityImpersonationFactory.CreateImpersonation()
       at Microsoft.AccessManagement.BHOLDConnector.DataAccess.IntegratedSecurityDataAccess..ctor(String serverName, String databaseName, String username, String password, String domain)
       at Microsoft.AccessManagement.BHOLDConnector.BHOLDConnector.GetDataAccess(KeyedCollection`2 configParameters)
       at Microsoft.AccessManagement.BHOLDConnector.BHOLDConnector.GetSchema(KeyedCollection`2 configParameters)
    Forefront Identity Manager 4.1.3114.0"
    Event Xml:
    <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
      <System>
        <Provider Name="FIMSynchronizationService" />
        <EventID Qualifiers="49152">6801</EventID>
        <Level>2</Level>
        <Task>3</Task>
        <Keywords>0x80000000000000</Keywords>
        <TimeCreated SystemTime="2014-01-15T14:42:27.000000000Z" />
        <EventRecordID>110324</EventRecordID>
        <Channel>Application</Channel>
        <Computer>xxxxx</Computer>
        <Security />
      </System>
      <EventData>
        <Data>System.ObjectDisposedException: Cannot access a disposed object.
    Object name: 'WindowsIdentityImpersonationFactory'.
       at Microsoft.AccessManagement.BHOLDConnector.Context.WindowsIdentityImpersonationFactory.CreateImpersonation()
       at Microsoft.AccessManagement.BHOLDConnector.DataAccess.IntegratedSecurityDataAccess..ctor(String serverName, String databaseName, String username, String password, String domain)
       at Microsoft.AccessManagement.BHOLDConnector.BHOLDConnector.GetDataAccess(KeyedCollection`2 configParameters)
       at Microsoft.AccessManagement.BHOLDConnector.BHOLDConnector.GetSchema(KeyedCollection`2 configParameters)
    Forefront Identity Manager 4.1.3114.0</Data>
      </EventData>
    </Event>

    This version of the sync engine has a known issue with ECMAs when attempting to refresh the schema. The following KB article, for build 4.1.3441 of FIM 2010 R2, describes this issue. I believe it is synchronization service issue 7.
    http://support.microsoft.com/kb/2832389/en-us
    The builds of FIM R2 are cumulative. This is actually the latest version, build 4.1.3496. This version includes all of the fixes from previous hotfixes, including the above.
    http://support.microsoft.com/kb/2906832
    I had this same problem, and after installing 4.1.3441 the schema refresh on the BHOLD MA worked as expected.
    P.S.
    Also, I wouldn't populate all of the attributes on export. Several of these should have been hidden but weren't. If you populate bholdLastname or bholdFirstname, it can overwrite the value of bholdDescription, which is an important attribute. Also, by default,
    there are no attributes in BHOLD that these two actually sync up with.

  • Read-Through Caching with expiry-delay and near cache (front scheme)

    We are experiencing a problem with our custom CacheLoader and a near cache together with an expiry-delay on the backing map scheme.
    I was under the assumption that it was possible to have an expiry-delay configured on the backing scheme and that the near cache object would be evicted when the backing object was evicted. But according to our tests, we have to put an expiry-delay on the front scheme too.
    Is my assumption correct that there will be no automatic eviction from the near cache (front scheme)?
    With this config, near cache is never cleared:
                 <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme />
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
            </distributed-scheme>
    With this config (expiry-delay added on the front-scheme), the near cache does get cleared:
            <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme>
                                 <expiry-delay>15s</expiry-delay>
                            </local-scheme>
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                <autostart>true</autostart>
                </near-scheme>
                <distributed-scheme>
                      <scheme-name>config-part-distributed</scheme-name>
                      <service-name>partDistributedCacheService</service-name>
                      <thread-count>10</thread-count>
                      <backing-map-scheme>
                            <read-write-backing-map-scheme>
                                  <read-only>true</read-only>
                                  <scheme-name>partStatusScheme</scheme-name>
                                  <internal-cache-scheme>
                                        <local-scheme>
                                              <scheme-name>part-eviction</scheme-name>
                                              <expiry-delay>30s</expiry-delay>
                                        </local-scheme>
                                  </internal-cache-scheme>
                                  <cachestore-scheme>
                                        <class-scheme>
                                              <class-name>net.jakeri.test.PingCacheLoader</class-name>
                                        </class-scheme>
                                  </cachestore-scheme>
                                  <refresh-ahead-factor>0.5</refresh-ahead-factor>
                            </read-write-backing-map-scheme>
                      </backing-map-scheme>
                      <autostart>true</autostart>
                      <local-storage system-property="tangosol.coherence.config.distributed.localstorage">true</local-storage>
                </distributed-scheme>

    Hi Jakkke,
    The near cache scheme allows you to have configurable levels of cache coherency, from the most basic expiry-based cache, to an invalidation-based cache, to a data-versioning cache, depending on the coherency requirements. The near cache is commonly used to achieve the performance of a replicated cache without losing the scalability aspects of a partitioned cache; this is achieved by keeping a subset of the data (based on MRU or MFU) in the <front-scheme> of the near cache and the complete set of data in the <back-scheme>. Updates to the <back-scheme> can automatically trigger events to invalidate entries in the <front-scheme>, based on the invalidation strategy (present, all, none, auto) configured for the near cache.
    If you want to expire the entries in both the <front-scheme> and the <back-scheme>, you need to specify an expiry-delay on both schemes, as you did in your last example. If you are expiring items in the <back-scheme> only so that they get reloaded from the cache store, while the <front-scheme> keys remain the same (only the values should be refreshed), then you need not set an expiry-delay on the <front-scheme>; instead, set the invalidation strategy to present. But if you want a different set of entries in the <front-scheme> after a specified expiry delay, then you do need to configure it on the <front-scheme>.
    The near cache can keep front-scheme and back-scheme data in sync, but the expiry of entries is not synced. The front scheme is always a subset of the back scheme.
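    For the second case (front-scheme keys retained, values refreshed from the cache store), the strategy sits on the near scheme itself rather than on a front-scheme expiry; a sketch based on the first config above:
                 <near-scheme>
                      <scheme-name>config-part</scheme-name>
                      <front-scheme>
                            <local-scheme />
                      </front-scheme>
                      <back-scheme>
                            <distributed-scheme>
                                  <scheme-ref>config-part-distributed</scheme-ref>
                            </distributed-scheme>
                      </back-scheme>
                      <invalidation-strategy>present</invalidation-strategy>
                      <autostart>true</autostart>
                 </near-scheme>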
    Hope this helps!
    Cheers,
    NJ

  • Access to schema objects

    Guys,
    I am on 10g R2 and have this requirement.
    We refresh our QA environment from PROD every day (exp/imp and schema refresh). The app team now has a requirement whereby they want to create a few objects, including tables, every day and load those tables with static data. This, in a way, doesn't need to be refreshed every day, so it could be placed in another schema. But we do not want to grant access on that schema to the user.
    Alternatively, we can get them to send us a script that can be run as a post-refresh step on completion of the refresh. But this would mean that any change to the script involves us copying the file, which could be a hassle.
    I am wondering if there is a better way to handle this. BTW, the app team do not have access to the database host and we don't plan to grant it either.
    What is the best alternative?

    You can use DBMS_METADATA and EXECUTE IMMEDIATE
    SQL> create user u1 identified by u1;
    User created.
    SQL> create or replace type u1.type1 as object (a number, b date);
      2  /
    Type created.
    SQL> declare
      2  stat varchar2(32000):= dbms_metadata.get_ddl('TYPE','TYPE1','U1');
      3  begin
      4    execute immediate replace(stat,'"U1".','');
      5  end;
      6  /
    PL/SQL procedure successfully completed.
    SQL> desc type1
    Name                                                        Null?    Type
    A                                                                    NUMBER
    B                                                                    DATE
    You can loop over all schema objects selected from dba_objects to manage all objects with a single statement:
    SQL> create function u1.f1 return number is
      2  begin
      3    return 0;
      4  end;
      5  /
    Function created.
    SQL> declare
      2  stat varchar2(32000);
      3  begin
      4    for r in (select object_type, object_name from dba_objects where owner='U1') loop
      5      stat := dbms_metadata.get_ddl(r.object_type,r.object_name,'U1');
      6      dbms_output.put_line(stat);
      7      execute immediate replace(stat,'"U1".','');
      8    end loop;
      9  end;
    10  /
      CREATE OR REPLACE TYPE "U1"."TYPE1" as object (a number, b date);
      CREATE OR REPLACE FUNCTION "U1"."F1" return number is
    begin
      return 0;
    end;
    PL/SQL procedure successfully completed.
    SQL> desc f1
    FUNCTION f1 RETURNS NUMBER
    SQL> desc type1
    Name                                                        Null?    Type
    A                                                                    NUMBER
    B                                                                    DATE
    Max
    [My Italian Oracle blog|http://oracleitalia.wordpress.com/2010/02/07/aggiornare-una-tabella-con-listruzione-merge/]

  • Database Refresh:  Save Job Components?

    Our DBAs will be performing a database refresh of our dev/test environments based on a current Production model. To account for differences in our existing jobs/chains between Production and our other environments, it would be ideal to save off the existing objects in dev/test before the refresh. Once the refresh is complete, we could then restore the jobs/chains.
    Has anyone experienced similar?
    - If a schema refresh is done of the particular owner of the jobs, will the underlying jobs, chains, programs, steps, etc. be included in the schema refresh?
    - If not, where, specifically, are the scheduler objects stored so they can be copied?
    Any insight would be appreciated.

    Thanks for the caution. After we performed the export, I removed the jobs owned by that schema user and was able to rebuild them using the DDL. I did notice that when the chain rules were created, the 'condition' syntax was converted. For example, instead of my first rule having a condition of 'TRUE', Oracle converted it to 1=1, and instead of 'STEP_1 Completed and STEP_2 Completed' it converted to ':STEP_1.STATE = ''COMPLETED''', etc. I don't think this should really be an issue.
    However, the real kicker is that the chain rule 'action' came back null. This includes removing actions such as 'START "STEP_3"', and even my 'END' action. Is that what you were referring to? Not sure if this is a bug and/or if there is a workaround.
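    For reference, scheduler jobs, chains and programs live in the data dictionary, and their DDL can be extracted with DBMS_METADATA using the PROCOBJ object type, which is one way to script the save/restore step (a sketch; the owner name is a placeholder):
    SELECT dbms_metadata.get_ddl('PROCOBJ', object_name, owner)
    FROM   dba_objects
    WHERE  owner = 'JOBS_OWNER'
    AND    object_type IN ('JOB', 'PROGRAM', 'CHAIN', 'SCHEDULE');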

  • Unable to get refresh-ahead functionality working

    We are attempting to implement a cache with refresh-ahead so that the calling code never experiences the latency of a synchronous load from the cache loader, which happens when requesting an object that has expired (so just standard refresh-ahead as documented). I can't seem to get the configuration to recognize the refresh-ahead setting at all; it behaves the same way regardless of whether or not the <refresh-ahead-factor> setting is included, and never does an asynchronous load. All the other settings in the configuration work as expected. I have included the configuration, what I expect the behaviour to be, and what it currently is.
    Is there anything else in other configuration files, etc. that needs to be set up to enable asynchronous loads via the refresh-ahead-factor? Note I have tried upping the worker threads via the thread-count in tangosol-coherence-override.xml (no difference).
    One thing I couldn't seem to find in the documents is when the asynchronous load will occur. I assume it happens in near real time after a triggering 'get' past the refresh-ahead threshold, but if not, that could explain the behaviour. Is there some setting that controls this, or does it happen as soon as a worker thread gets to it?
    In my tests I am running a single cluster node on my local machine (version 3.3 Grid Edition in Development mode). Note the 20-second expiry and 0.5 refresh-ahead factor are just for test purposes to observe behaviour; in production it will be 12 hours with a higher refresh factor such as 0.75.
    Current behaviour (appears to be standard read-through, ignoring the refresh-ahead-factor):
    - a request for an object that does not exist in the cache blocks the calling code until it is loaded and put into the cache via the cache loader
    - a request for an object that exists, made before the expiry period has passed (before 20 seconds since load in this configuration), returns the object from the cache
    - a request for an object that exists, made after the expiry period has passed (after 20 seconds since load in this configuration), blocks the calling code until it is loaded and put into the cache via the cache loader
    - no requests ever appear to trigger asynchronous loads via the cache loader
    Expected behaviour, given the 0.5 refresh-ahead-factor:
    - a request for an object that does not exist in the cache blocks the calling code until it is loaded and put into the cache via the cache loader (same as above)
    - a request for an object that exists, made before the expiry period has passed, returns the object from the cache and triggers an asynchronous reload via the cache loader if made after the refresh-ahead threshold. So in this example I would expect that if I requested an object between 10 and 20 seconds after it was loaded, it would be returned from the cache immediately and an asynchronous reload would be triggered.
    - a request for an object that exists in the cache but was not requested during the 10-20 second window blocks the calling code until it is loaded and put into the cache via the cache loader
    Here is the entry from our coherence-cache-config.xml
    <distributed-scheme>
       <scheme-name>rankedcategories-cache-all-scheme</scheme-name>
       <service-name>DistributedCache</service-name>
       <backing-map-scheme>
          <read-write-backing-map-scheme>
             <scheme-name>rankedcategoriesLoaderScheme</scheme-name>
             <internal-cache-scheme>
                <local-scheme>
                   <scheme-ref>rankedcategories-eviction</scheme-ref>
                </local-scheme>
             </internal-cache-scheme>
             <cachestore-scheme>
                <class-scheme>
                   <class-name>com.abe.cache.coherence.rankedcategories.RankedCategoryCacheLoader</class-name>
                </class-scheme>
             </cachestore-scheme>
             <refresh-ahead-factor>0.5</refresh-ahead-factor>
          </read-write-backing-map-scheme>
       </backing-map-scheme>
       <autostart>true</autostart>
    </distributed-scheme>
    <local-scheme>
       <scheme-name>rankedcategories-eviction</scheme-name>
       <expiry-delay>20s</expiry-delay>
    </local-scheme>

    Hi Leonid,
    Yes, it works as expected. Refresh-ahead works approximately like this:
    public Object get(Object oKey) {
        Entry entry = getEntry(oKey);
        if (entry != null && entry is about to expire) {
            schedule refresh();
        }
        return entry == null ? null : entry.getValue();
    }
    If you want to reload infrequently accessed entries, you can do something like this (obviously this will not work if the cache is size-limited):
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
        <caching-scheme-mapping>
            <cache-mapping>
                <cache-name>fdg*</cache-name>
                <scheme-name>fdg-cache-all-scheme</scheme-name>
            </cache-mapping>
        </caching-scheme-mapping>
        <caching-schemes>
            <distributed-scheme>
                <scheme-name>fdg-cache-all-scheme</scheme-name>
                <service-name>DistributedCache</service-name>
                <backing-map-scheme>
                    <!--
                    Read-write-backing-map caching scheme.
                    -->
                    <read-write-backing-map-scheme>
                        <scheme-name>categoriesLoaderScheme</scheme-name>
                        <internal-cache-scheme>
                            <local-scheme>
                                <expiry-delay>10s</expiry-delay>
                                <flush-delay>2s</flush-delay>
                                <listener>
                                        <class-scheme>
                                            <class-name>com.sgcib.fundingplatform.coherence.ReloadListener</class-name>
                                            <init-params>
                                                <init-param>
                                                    <param-type>string</param-type>
                                                    <param-value>{cache-name}</param-value>
                                                </init-param>
                                                <init-param>
                                                    <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                                                    <param-value>{manager-context}</param-value>
                                                </init-param>
                                            </init-params>
                                        </class-scheme>
                                </listener>
                            </local-scheme>
                        </internal-cache-scheme>
                        <cachestore-scheme>
                            <class-scheme>
                                <class-name>com.sgcib.fundingplatform.coherence.DBCacheStore</class-name>
                            </class-scheme>
                        </cachestore-scheme>
                        <refresh-ahead-factor>0.5</refresh-ahead-factor>
                    </read-write-backing-map-scheme>
                </backing-map-scheme>
                <autostart>true</autostart>
            </distributed-scheme>
            <local-scheme>
                <scheme-name>categories-eviction</scheme-name>
                <expiry-delay>10s</expiry-delay>
                <flush-delay>2s</flush-delay>
            </local-scheme>
        </caching-schemes>
    </cache-config>
    package com.sgcib.fundingplatform.coherence;
    import com.tangosol.net.BackingMapManagerContext;
    import com.tangosol.net.DefaultConfigurableCacheFactory;
    import com.tangosol.net.cache.CacheEvent;
    import com.tangosol.util.MapEvent;
    import com.tangosol.util.MultiplexingMapListener;
    import java.util.Map;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ThreadFactory;
    // dimitri Nov 26, 2008
    public class ReloadListener extends MultiplexingMapListener {
        String                                  m_sCacheName;
        DefaultConfigurableCacheFactory.Manager m_bmmManager;
        // single daemon thread that performs the asynchronous reloads
        ExecutorService m_executorService = Executors.newSingleThreadExecutor(new ThreadFactory() {
            public Thread newThread(Runnable runnable) {
                Thread thread = new Thread(runnable);
                thread.setDaemon(true);
                return thread;
            }
        });
        public ReloadListener(String sCacheName, BackingMapManagerContext ctx) {
            m_sCacheName = sCacheName;
            m_bmmManager =
                (DefaultConfigurableCacheFactory.Manager) ctx.getManager();
        }
        // reload an entry whenever it is evicted by expiry (a synthetic delete)
        protected void onMapEvent(MapEvent evt) {
            if (evt.getId() == MapEvent.ENTRY_DELETED && ((CacheEvent) evt).isSynthetic()) {
                m_executorService.execute(
                    new ReloadRequest(m_bmmManager.getBackingMap(m_sCacheName), evt.getKey()));
            }
        }
        public void finalize() {
            m_executorService.shutdownNow();
        }
        class ReloadRequest implements Runnable {
            Map    m_map;
            Object m_oKey;
            public ReloadRequest(Map map, Object oKey) {
                m_map  = map;
                m_oKey = oKey;
            }
            public void run() {
                // a get() against the backing map drives the cache store load
                m_map.get(m_oKey);
            }
        }
    }
    package com.sgcib.fundingplatform.coherence;
    import java.util.Collection;
    import java.util.Map;
    import java.util.Date;
    import com.tangosol.net.cache.CacheStore;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Base;
    public class DBCacheStore extends Base implements CacheStore {
        /**
         * Return the value associated with the specified key, or null if the
         * key does not have an associated value in the underlying store.
         *
         * @param oKey key whose associated value is to be returned
         * @return the value associated with the specified key, or
         *         null if no value is available for that key
         */
        public Object load(Object oKey) {
            CacheFactory.log("load(" + oKey + ") invoked on " + Thread.currentThread().getName(),
                CacheFactory.LOG_DEBUG);
            return new Date().toString();
        }
        /**
         * Return the values associated with each of the specified keys in the
         * passed collection. If a key does not have an associated value in
         * the underlying store, then the return map will not have an entry
         * for that key.
         *
         * @param colKeys a collection of keys to load
         * @return a Map of keys to associated values for the specified keys
         */
        public Map loadAll(Collection colKeys) {
            throw new UnsupportedOperationException();
        }
        public void erase(Object arg0) {
            // no-op: this sample store is read-only
        }
        public void eraseAll(Collection arg0) {
            // no-op: this sample store is read-only
        }
        public void store(Object arg0, Object arg1) {
            // no-op: this sample store is read-only
        }
        public void storeAll(Map arg0) {
            // no-op: this sample store is read-only
        }
        // Test harness
        public static void main(String[] asArg) {
            try {
                NamedCache cache = CacheFactory.getCache("fdg-test");
                cache.get("bar"); // this will not be requested again and hence
                                  // not reloaded by the listener
                while (true) {
                    CacheFactory.log("foo= " + cache.get("foo"), CacheFactory.LOG_DEBUG);
                    Thread.sleep(1000l);
                }
            } catch (Throwable oops) {
                err(oops);
            } finally {
                CacheFactory.shutdown();
            }
        }
    }
    Regards,
    Dimitri

  • SYS, SYSTEM and SYSAUX when full database refresh.

    I took a full export from the database using the command below:
    expdp "'/ as sysdba'" full=Y directory=DPUMP_DIR dumpfile=expdp_11032011.dmp logfile=expdp_11032011.log
    Now I need to import this file into another database.
    When doing a schema refresh we usually drop all the objects in that schema and start the refresh, but when doing a full import, do we need to drop all users?
    What about the SYS, SYSTEM and SYSAUX users?

    user3636719 wrote:
    So, the tables in SYS and SYSTEM will remain the same when we refresh?
    The structure will not be modified, but the contents will be automatically modified as DDL is executed during the import.
    And do we have to drop other users before we import?
    Application schemas that you have created should be dropped. In general, don't modify any schema that is directly managed by Oracle, such as SYS or SYSTEM, or any schema used by a database option such as Oracle Text, Oracle Spatial, etc.
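    For example, dropping one application schema ahead of the import might look like this (the schema name is a placeholder):
    DROP USER app_owner CASCADE;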

  • Refresh Ahead Cache with JPA

    I am trying to use refresh-ahead caching with JpaCacheStore. My backing-map config is given below. I am using the same JPA example as given in the Coherence tutorial. The cache only loads the data from the database when the server starts; when I change the data in the DB, the change is not reflected in the cache. I am not sure I am doing the right thing. Need your help!!
    <backing-map-scheme>
                        <read-write-backing-map-scheme>
                             <!--Define the cache scheme-->
                             <internal-cache-scheme>
                                  <local-scheme>
                                        <expiry-delay>1m</expiry-delay>
                                  </local-scheme>
                             </internal-cache-scheme>
                             <cachestore-scheme>
                                  <class-scheme>
                                       <class-name>com.tangosol.coherence.jpa.JpaCacheStore</class-name>
                                       <init-params>
                                            <!--
                                            This param is the entity name
                                            This param is the fully qualified entity class
                                            This param should match the value of the
                                            persistence unit name in persistence.xml
                                            -->
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>com.oracle.handson.{cache-name}</param-value>
                                            </init-param>
                                            <init-param>
                                                 <param-type>java.lang.String</param-type>
                                                 <param-value>JPA</param-value>
                                            </init-param>
                                       </init-params>
                                  </class-scheme>
                             </cachestore-scheme>
                         <refresh-ahead-factor>0.5</refresh-ahead-factor>
                        </read-write-backing-map-scheme>
                   </backing-map-scheme>
    Thanks in advance.
    John

    I guess this is the answer.
    Sorry for the dumb question :)
    Note: For use with Partitioned (Distributed) and Near cache
    topologies: Read-through/write-through caching (and variants) are
    intended for use only with the Partitioned (Distributed) cache
    topology (and by extension, Near cache). Local caches support a
    subset of this functionality. Replicated and Optimistic caches should
    not be used.
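    In other words, the read-write backing map above needs to sit inside a partitioned (distributed) scheme for read-through and refresh-ahead to apply. A minimal sketch (the scheme and service names here are made up):
    <distributed-scheme>
         <scheme-name>jpa-distributed</scheme-name>
         <service-name>DistributedCache</service-name>
         <backing-map-scheme>
              <!-- the read-write-backing-map-scheme from the question goes here,
                   with its internal cache, JpaCacheStore and refresh-ahead-factor -->
         </backing-map-scheme>
         <autostart>true</autostart>
    </distributed-scheme>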

  • What does this scheme-name stand for ?

    What does this <scheme-name> stand for? Does it stand for the name of this <read-write-backing-map-scheme>?
    <read-write-backing-map-scheme>
    <scheme-name>categoriesLoaderScheme</scheme-name>
    <internal-cache-scheme>
    <local-scheme>
    <scheme-ref>categories-eviction</scheme-ref>
    </local-scheme>
    </internal-cache-scheme>
    <cachestore-scheme>
    <class-scheme>
    <class-name>com.demo.cache.coherence.categories.CategoryCacheLoader</class-name>
    </class-scheme>
    </cachestore-scheme>
    <refresh-ahead-factor>0.5</refresh-ahead-factor>
    </read-write-backing-map-scheme>
    This excerpt comes from the following cache-config.xml file:
    <cache-config>
       <distributed-scheme>
          <scheme-name>categories-cache-all-scheme</scheme-name>
          <service-name>DistributedCache</service-name>
          <backing-map-scheme>
          <!--
          Read-write-backing-map caching scheme.
          -->
          <read-write-backing-map-scheme>
             <scheme-name>categoriesLoaderScheme</scheme-name>
             <internal-cache-scheme>
                <local-scheme>
                   <scheme-ref>categories-eviction</scheme-ref>
                </local-scheme>
             </internal-cache-scheme>
             <cachestore-scheme>
                <class-scheme>
                   <class-name>com.demo.cache.coherence.categories.CategoryCacheLoader</class-name>
                </class-scheme>
             </cachestore-scheme>
             <refresh-ahead-factor>0.5</refresh-ahead-factor>
          </read-write-backing-map-scheme>
          </backing-map-scheme>
          <autostart>true</autostart>
       </distributed-scheme>
        <!--
        Backing map scheme definition used by all the caches that require
        size limitation and/or expiry eviction policies.
        -->
       <local-scheme>
          <scheme-name>categories-eviction</scheme-name>
          <expiry-delay>20s</expiry-delay>
       </local-scheme>
    </cache-config>

    If you look at the documentation here [http://download.oracle.com/docs/cd/E14526_01/coh.350/e14509/appcacheelements.htm#BABEFGCG], you will see that it does indeed specify the name of the scheme. The other XML tags are explained there too.
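    A named scheme can then be reused elsewhere in the file via <scheme-ref>, for example (a hypothetical reuse of the scheme above):
    <read-write-backing-map-scheme>
       <scheme-ref>categoriesLoaderScheme</scheme-ref>
    </read-write-backing-map-scheme>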
    JK
