Configure cache policy of a distributed cache
How do I configure the cache policy of a distributed cache (e.g. eviction policy, high units, expiry delay, etc.)? Should I use com.tangosol.util.Cache instead of com.tangosol.util.SafeHashMap as the backing map of the distributed cache?
Hi Jin,
I have attached an example of the descriptor used to set up a Distributed Caching scheme. This example shows how to set up both a 'vanilla' Distributed Cache and a 'size-limited/auto-expiry' cache.
The 'HYBRID' local-scheme will automatically use the com.tangosol.net.cache.LocalCache implementation (a subclass of com.tangosol.util.Cache).
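Since the original attachment is no longer available, here is a minimal sketch of what such a descriptor typically looks like. The element names follow the Coherence cache configuration schema; the cache names, scheme names, and limits are illustrative only:

```xml
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
  <caching-scheme-mapping>
    <!-- Map one family of cache names to a plain scheme, another to a size-limited one. -->
    <cache-mapping>
      <cache-name>plain-*</cache-name>
      <scheme-name>dist-vanilla</scheme-name>
    </cache-mapping>
    <cache-mapping>
      <cache-name>limited-*</cache-name>
      <scheme-name>dist-limited</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <!-- 'Vanilla' distributed cache: unlimited backing map. -->
    <distributed-scheme>
      <scheme-name>dist-vanilla</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
    <!-- 'Size-limited/auto-expiry' distributed cache: the backing map evicts
         above high-units (HYBRID policy) and expires entries after a delay. -->
    <distributed-scheme>
      <scheme-name>dist-limited</scheme-name>
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
        <local-scheme>
          <eviction-policy>HYBRID</eviction-policy>
          <high-units>10000</high-units>
          <expiry-delay>1h</expiry-delay>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>
```

Note that high-units here limits each storage-enabled node's backing map, not the cluster-wide total.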
Later,
Rob Misek
Tangosol, Inc.
Coherence: Cluster your Work. Work your Cluster.
Attachment: distributed-cache-config.xml
Similar Messages
-
Need Help regarding initial configuration for distributed cache
Hi ,
I am new to Tangosol and trying to set up a basic partitioned distributed cache, but I have not been able to do so.
Here is my Scenario,
My application DataServer creates the instance of the Tangosol cache.
I have this config.xml set on the machine where my application starts.
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<!--
Caches with any name will be created as default near.
-->
<cache-mapping>
<cache-name>*</cache-name>
<scheme-name>default-distributed</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<!--
Default Distributed caching scheme.
-->
<distributed-scheme>
<scheme-name>default-distributed</scheme-name>
<service-name>DistributedCache</service-name>
<backing-map-scheme>
<class-scheme>
<scheme-ref>default-backing-map</scheme-ref>
</class-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
<!--
Default backing map scheme definition used by all the caches that do
not require any eviction policies
-->
<class-scheme>
<scheme-name>default-backing-map</scheme-name>
<class-name>com.tangosol.util.SafeHashMap</class-name>
<init-params></init-params>
</class-scheme>
</caching-schemes>
</cache-config>
Now on the same machine I start a different client using the command
java -Dtangosol.coherence.distributed.localstorage=false -Dtangosol.coherence.cacheconfig=near-cache-config.xml -classpath
"C:/calypso/software/release/build" -jar ../lib/coherence.jar
The problems I am facing are:
1) Even if I do not start the client, my application server caches the data. My config.xml is set to distributed, so under no circumstances should it cache the data locally ...
2) I want to bind different caches to different processes on different machines, e.g.
machine1 should cache Cache1 objects
machine2 should cache Cache2 objects
and so on ... but I could not find any documentation which explains how to do this. Can someone give me an example of how to do it?
3) I want to know the details of the cache stored on any particular node, e.g. which caches machine1 contains and their corresponding object values, etc.
Regards
Mahesh
Hi, thanks for the answer.
After digging into the wiki a lot, I found something related to KeyAssociation. I think what I need is an implementation of KeyAssociation that stores a particular cache's objects on a particular node or group of nodes.
Say, for example, I want this kind of setup:
Cache1 --> node1, node2 (as I forecast this will take a lot of memory, I assign these JVMs something like 10 G)
Cache2 --> node3, assigned a small amount of memory (like 2 G)
and so on ...
From the wiki documentation I see:
Key Association
By default the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator may be used to indicate related entries, the partitioned cache service will ensure that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
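As a sketch of the application-code approach described above: the key class implements KeyAssociation and returns the value that related keys share, so all entries reporting the same associated key land in the same partition (and thus the same node). The OrderKey/customer naming below is purely illustrative, and the interface shown is a local stand-in for com.tangosol.net.cache.KeyAssociation from coherence.jar:

```java
import java.io.Serializable;
import java.util.Objects;

// Stand-in for com.tangosol.net.cache.KeyAssociation (from coherence.jar);
// the real interface declares this single method.
interface KeyAssociation {
    Object getAssociatedKey();
}

// Hypothetical key: all orders for one customer co-locate on one partition,
// because every OrderKey reports the customer id as its associated key.
class OrderKey implements KeyAssociation, Serializable {
    private final String orderId;
    private final String customerId;

    OrderKey(String orderId, String customerId) {
        this.orderId = orderId;
        this.customerId = customerId;
    }

    // The partitioned service partitions by this value instead of the key itself.
    public Object getAssociatedKey() {
        return customerId;
    }

    // Cache keys must have consistent equals/hashCode.
    @Override
    public boolean equals(Object o) {
        if (!(o instanceof OrderKey)) return false;
        OrderKey other = (OrderKey) o;
        return orderId.equals(other.orderId) && customerId.equals(other.customerId);
    }

    @Override
    public int hashCode() {
        return Objects.hash(orderId, customerId);
    }
}
```

With such a key, puts for "o-1"/"c-42" and "o-2"/"c-42" would end up on the same cluster node, so operations touching one customer's entries stay local. This co-locates data by customer but does not pin a whole named cache to a chosen machine.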
Does anyone have an example explaining how this is done in the simplest way? -
Distributed Cache service stuck in Starting Provisioning
Hello,
I'm having problem with starting/stopping Distributed Cache service in one of the SharePoint 2013 farm servers. Initially, Distributed Cache was enabled in all the farm servers by default and it was running as a cluster. I wanted to remove it from all hosts
but one (APP server) using below PowerShell commands, which worked fine.
Stop-SPDistributedCacheServiceInstance -Graceful
Remove-SPDistributedCacheServiceInstance
But later I attempted to add the service back to two hosts (WFE servers) using below command and unfortunately one of them got stuck in the process. When I look at the Services on Server from Central Admin, the status says "Starting".
Add-SPDistributedCacheServiceInstance
Also, when I execute below script, the status says "Provisioning".
Get-SPServiceInstance | ? {($_.service.tostring()) -eq "SPDistributedCacheService Name=AppFabricCachingService"} | select Server, Status
I get "cacheHostInfo is null" error when I use "Stop-SPDistributedCacheServiceInstance -Graceful".
I tried below script,
$instanceName ="SPDistributedCacheService Name=AppFabricCachingService"
$serviceInstance = Get-SPServiceInstance | ? {($_.service.tostring()) -eq $instanceName -and ($_.server.name) -eq $env:computername}
$serviceInstance.Unprovision()
$serviceInstance.Delete()
,but it didn't work either, and I got below error.
"SPDistributedCacheServiceInstance", could not be deleted because other objects depend on it. Update all of these dependants to point to null or
different objects and retry this operation. The dependant objects are as follows:
SPServiceInstanceJobDefinition Name=job-service-instance-{GUID}
Has anyone come across this issue? I would appreciate any help.
Thanks!
Hi,
Are you able to ping the server that is already running Distributed Cache on this server? For example:
ping WFE01
As you are using more than one cache host in your server farm, you must configure the first cache host running the Distributed Cache service to allow Inbound ICMP (ICMPv4) traffic through the firewall. If an administrator removes the first cache host from the cluster which was configured to allow Inbound ICMP (ICMPv4) traffic through the firewall, you must configure the first server of the new cluster to allow Inbound ICMP (ICMPv4) traffic through the firewall.
You can create a rule to allow the incoming port.
For more information, you can refer to the blog:
http://habaneroconsulting.com/insights/Distributed-Cache-Needs-Ping#.U4_nmPm1a3A
Thanks,
Eric
Forum Support
Please remember to mark the replies as answers
if they help and unmark them if they provide no help. If you have feedback for TechNet Subscriber Support, contact [email protected]
Eric Tao
TechNet Community Support -
Setup failover for a distributed cache
Hello,
For our production setup we will have 4 app servers one clone per each app server. so there will be 4 clones to a cluster. And we will have 2 jvms for our distributed cache - one being a failover, both of those will be in cluster.
How would i configure the failover for the distributed cache?
Thanks
user644269 wrote:
Right - so each of the near cache schemes defined would need to have the back map high-units set to where it could take on 100% of data.
Specifically, the near-scheme/back-scheme/distributed-scheme/backing-map-scheme/local-scheme/high-units value (take a look at the [Cache Configuration Elements|http://coherence.oracle.com/display/COH34UG/Cache+Configuration+Elements]).
There are two options:
1) No Expiry -- In this case you would have to size the storage-enabled JVMs so that an individual JVM could store all of the data.
or
2) Expiry -- In this case you would set the high-units to a value that you determine. If you want it to store all the data then it needs to be set higher than the total number of objects that you will store in the cache at any given time, or you can set it lower with the understanding that once that high-units is reached Coherence will evict some data from the cluster (i.e. remove it from the "cluster memory").
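As a sketch of where that element sits (the scheme names and the 100000 figure are placeholders; per option 1 you would size it above the expected total object count):

```xml
<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme>
          <!-- This is the near-scheme/back-scheme/distributed-scheme/
               backing-map-scheme/local-scheme/high-units value discussed above. -->
          <high-units>100000</high-units>
        </local-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </back-scheme>
</near-scheme>
```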
user644269 wrote:
Other than that - there is no configuration needed to ensure that these JVMs act as a failover in the event one goes down.
Correct, data fault tolerance is on by default (set to one level of redundancy).
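That default can be made explicit (or raised) with the backup-count element of the distributed scheme; a sketch with an illustrative scheme name:

```xml
<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <!-- One backup copy per partition is the default; raising this adds
       redundancy at the cost of additional memory per entry. -->
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
```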
:Rob:
Coherence Team -
Programmatically use distributed caching in SharePoint 2013?
As per below TechNet article, developers are not allowed to use the AppFabric instance that comes with SharePoint 2013. The expectation is to deploy a seperate AppFabric cluster for custom applications.
http://technet.microsoft.com/en-us/library/jj219572.aspx#Important
Distributed caching was a long-awaited, nice SharePoint feature, but it does not make sense to restrict developer access to it. We have a requirement to cache a fairly small amount of data (which must be distributed), but we cannot deploy separate cache servers for this. What are my options (other than System.Web.Caching)? Is there an API to safely cache a smaller amount of data? What is the rationale behind restricting access to the default AppFabric instance that comes with SharePoint?
Thanks in Advance,
Amal
Yes, it's just a thread-safe implementation of an object cache and not a distributed cache. Probably the reason behind recommending a separate AppFabric cache cluster is that additional named caches of custom solutions and applications will interfere with the SharePoint named caches. Without a scope and priority defined, the named caches of custom solutions may start evicting SharePoint items if their usage is much higher than that of the SharePoint ones.
AppFabric Caching and SharePoint: Concepts and Examples (Part 1)
This post is my own opinion and does not necessarily reflect the opinion or view of Slalom. -
My question is regarding SharePoint 2013 farm topology. If I want to go with a streamlined topology, having 2 distributed cache and RM servers + 2 front-end servers + 2 batch-processing servers + a clustered SQL Server, then how will the distributed cache servers connect to the front-end servers? Can I use the Windows 2012 NLB feature? If I use NLB, do I need to install NLB on all distributed cache servers and front-end servers and split out services? What would the configuration be for my scenario?
Thanks in advance!
For the Distributed Cache servers, you simply make them farm members (like any other SharePoint servers) and turn on the Distributed Cache service (while making sure it is disabled on all other farm members). Then validate that no other services (except for the Foundation Web service, due to ease of solution management) are enabled on the DC servers, and that no end-user requests or crawl requests are being routed to the DC servers. You do not need/use NLB for DC.
Trevor Seward
Follow or contact me at...
  
This post is my own opinion and does not necessarily reflect the opinion or view of Microsoft, its employees, or other MVPs. -
Limitation on number of objects in distributed cache
Hi,
Is there a limitation on the number (or total size) of objects in a distributed cache? I am seeing a big increase in response time when the number of objects exceeds 16,000. Normally, the ServiceMBean.RequestAverageDuration value is in the 6-8ms range as long as the number of objects in the cache is less than 16K - I've run our application for weeks at a time without seeing any problems. However, once the number of objects exceeds the magic number of 16K the average request duration almost immediately jumps to over 100ms and continues to climb as more objects are added.
I'm fairly confident that the cache is indexed properly (as Dimitri helped us with that). Are there any configuration changes that could possibly help out here? We are using Coherence 3.3.
Any suggestions would be greatly appreciated.
Thanks,
Jim
Hi Jim,
The results from the load test look quite normal, the system fairly quickly stabilizes at a particular performance level and remains there for the duration of the test. In terms of latency results, we see that the cache.putAll operations are taking ~45ms per bulk operation where each operation is putting 100 1K items, for cache.getAll operations we see about ~15ms per bulk operation. Additionally note that the test runs over 256,000 items, so it is well beyond the 16,000 limit you've encountered.
So it looks like your application is exhibiting different behavior than this test. You may wish to configure this test to behave as similarly to yours as possible. For instance, you can set the size of the cache to just over/under 16,000 using the -entries parameter, set the size of the entries to 900 bytes using the -size parameter, and set the total number of threads per worker using the -threads parameter.
What is quite interesting is that at 256,000 1K objects the latency measured with this test is apparently less than half the latency you are seeing with a much smaller cache size. This would seem to point at the issue being related to or rooted in your test. Would you be able to provide a more detailed description of how you are using the cache, and the types of operations you are performing?
thanks,
mark -
Hi,
We have a server (Server 1) on which the Distributed Cache service was in the "Error Starting" state.
While applying a service pack, due to some issue we were unable to apply the patch on Server 1, so we decided to remove the affected server from the farm and work on it. The affected server (Server 1) was removed from the farm through the configuration wizard.
Even after running the configuration wizard we were still able to see the server (Server 1) on the SharePoint Central Admin site (Servers in Farm); when clicked, the service "Distributed Cache" was still visible with the status "Error Starting". We tried deleting the server from the farm and got an error message; the ULS logs displayed the below.
A failure occurred in SPDistributedCacheServiceInstance::UnprovisionInternal. cacheHostInfo is null for host 'servername'.
8130ae9c-e52e-80d7-aef7-ead5fa0bc999
A failure occurred SPDistributedCacheServiceInstance::UnprovisionInternal()... isGraceFulShutDown 'False' , isGraceFulShutDown, Exception 'System.InvalidOperationException: cacheHostInfo is null at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
isGraceFulShutDown)'
8130ae9c-e52e-80d7-aef7-ead5fa0bc999
A failure occurred SPDistributedCacheServiceInstance::UnProvision() , Exception 'System.InvalidOperationException: cacheHostInfo is null at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.UnProvisionInternal(Boolean
isGraceFulShutDown) at Microsoft.SharePoint.DistributedCaching.Utilities.SPDistributedCacheServiceInstance.Unprovision()'
8130ae9c-e52e-80d7-aef7-ead5fa0bc999
We are unable to perform any operation (install/repair of SharePoint) on the affected server (Server 1); as the server is no longer in the farm, we are unable to run any PowerShell commands.
Questions:-
What would cause that to happen?
Is there a way to resolve this issue? (please provide the steps)
Satyam
Hi,
try this:
http://edsitonline.com/2014/03/27/unexpected-exception-in-feedcacheservice-isrepopulationneeded-unable-to-create-a-datacache-spdistributedcache-is-probably-down/
Hope this helps.
Different distributed caches within the cluster
Hi,
I've three machines, n1, n2 and n3 respectively, that host Tangosol. Two of them act as the primary distributed cache and the third one acts as the secondary cache. I also have WebLogic running on n1, which, based on some requests, pumps data onto the distributed cache on n1 and n2. I've a listener configured on n1 and n2, and on the entry-deleted event I would like to populate the Tangosol distributed service running on n3. All three nodes are within the same cluster.
I would like to ensure that the data coming directly from WebLogic is only distributed across n1 and n2 and NOT n3. For example, I do not start an instance of Tangosol on node n3, and an object gets pruned from either n1 or n2; ideally I should get a "storage not configured" exception, which does not happen.
The point is, the moment I say CacheFactory.getCache("Dist:n3") in the cache listener, Tangosol does populate the secondary cache by creating an instance of Dist:n3 on either n1 or n2, depending on where the object was pruned from.
From my understanding, I don't think we can have a config file on n1 and n2 that does not have a scheme for n3; I tried doing that and got an IllegalStateException.
My next step was to define the Dist:n3 scheme on n1 and n2 with local storage false, and have a similar config file on n3 with local-storage for Dist:n3 as true and local storage for the primary cache as false.
Can I configure local-storage specific to a cache rather than to a node?
I also have an EJB deployed on WebLogic that also entertains a getData request, i.e. this EJB will also check the primary cache and the secondary cache for data. I would have the statement NamedCache n3 = CacheFactory.getCache("n3") in the bean as well.
Hi Jigar,
I've three machines n1, n2 and n3 respectively that host Tangosol. 2 of them act as the primary distributed cache and the third one acts as the secondary cache.
First, I am curious as to the requirements that drive this configuration setup.
Can I configure local-storage specific to a cache rather than to a node?
In this scenario, I would recommend having the "primary" and "secondary" caches on different cache services (i.e. distributed-scheme/service-name). Then you can configure local storage on a service-by-service basis (i.e. distributed-scheme/local-storage).
Later,
Rob Misek
Tangosol, Inc. -
SCCM 2012 R2 Branch Cache - Distributed Cache mode
Hello to all, I understood the process to enable BranchCache - Distributed Cache mode on the source server and how to enable clients to use it. Fine. I saw a text that states:
BranchCache management is integrated in the Configuration Manager console. For applications, you can configure BranchCache on a deployment type. For programs and software updates, you can configure the BranchCache settings on the deployment.
Does it mean that I also need to configure BranchCache individually for applications, programs and updates so they can take advantage of it? If so, how do I do this "specific" configuration?
Regards, EEOC.
The BranchCache option is checked by default, so you would only need to disable it for any deployments where you didn't want this functionality.
You can't enable BranchCache for Task Sequences though - but we have a script for that :-)
http://2pintsoftware.com/branchcache-enable-task-sequences-sccm/
Phil Wilcock http://2pintsoftware.com @2pintsoftware -
Distributed cache and Windows AppFabric
I've got some issues with the Distributed Cache.
I followed this in order to remove the service instance:
Run Get-SPServiceInstance to find the GUID in the ID section of the Distributed Cache Service that is causing an issue.
$s = get-spserviceinstance GUID
$s.delete()
It deletes fine but when I try to add:
Add-SPDistributedCacheServiceInstance
I get this:
Add-SPDistributedCacheServiceInstance : Could not load file or assembly 'Microsoft.ApplicationServer.Caching.Configuration, Version=1.0.0.0, Cul
ture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The system cannot find the file specified.
How do I resolve this?
Hi JmATK,
Regarding this issue, we don't recommend deleting an instance without stopping the service gracefully, because there may be data or state that is still intact between one host and another.
The recommendation from Stacy is good, and if the issue is a zombie process causing an unresponsive or hung service, we may need to reset the process by re-attaching the database/farm.
Best regards.
Victoria
TechNet Community Support
On my application Server I am getting periodic entries under the General category
"Unable to write SPDistributedCache call usage entry."
The error occurs every 5 minutes exactly. It is followed by:
Calling... SPDistributedCacheClusterCustomProvider:: BeginTransaction
Calling... SPDistributedCacheClusterCustomProvider:: GetValue(object transactionContext, string type, string key
Calling... SPDistributedCacheClusterCustomProvider:: GetStoreUtcTime.
Calling... SPDistributedCacheClusterCustomProvider:: Update(object transactionContext, string type, string key, byte[] data, long oldVersion).
Sometimes this group of calls succeeds without an error and the sequence continues, maybe for 3 iterations every 5 minutes. Then the error
"Unable to write SPDistributedCache call usage entry."
happens again.
My Distributed Cache Service is running on my Application Server and on my web front end.
All values are default.
Any idea why this is happening intermittently?
Love them all...regardless. - Buddha
Hi,
From the error message, check whether the super accounts were set up correctly.
Refer to the article about configuring object cache user accounts in SharePoint Server 2013:
https://technet.microsoft.com/en-us/library/ff758656(v=office.15).aspx
If the issue persists, please check the SharePoint ULS log located at C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\15\LOGS to get a detailed error description.
Best Regards,
Lisa Chen
TechNet Community Support
Local Cache containing all Distributed Cache entries
Hello all,
I am seeing what appears to be some sort of problem. I have 2 JVMS running, one for the application and the other serving as a coherence cache JVM (near-cache scheme).
When i stop the cache JVM - the local JVM displays all 1200 entries even if the <high-units> for that cache is set to 300.
Does the local JVM keep a copy of the Distributed Data?
Can anyone explain this?
Thanks
Hi,
I have configured a near-cache with a front scheme and a back scheme. In the front scheme I have used a local cache and in the back scheme I have used the distributed cache. My idea is to have a distributed cache on the Coherence servers.
I have JVM 01, which has the WebLogic app server, while JVM 02 has 4 Coherence servers, all forming the cluster.
Q1: Where is the local cache data stored? Is it on the WebLogic app server or on the Coherence servers (SSI)?
Q2: Although I have shut down my 4 Coherence servers, I am still able to get the data in the app, so I have a feeling that the data is also stored locally on JVM 01, which has the WebLogic server running.
Q3: Do both the client apps and the Coherence servers need to use the same coherence-cache-config.xml?
Can somebody help me with these questions? I appreciate your time. -
Cache config for distributed cache and TCP*Extend
Hi,
I want to use distributed cache with TCP*Extend. We have defined "remote-cache-scheme" as the default cache scheme. I want to use a distributed cache along with a cache-store. The configuration I used for my scheme was
<distributed-scheme>
<scheme-name>MyScheme</scheme-name>
<backing-map-scheme>
<read-write-backing-map-scheme>
<internal-cache-scheme>
<class-scheme>
<class-name>com.tangosol.util.ObservableHashMap</class-name>
</class-scheme>
</internal-cache-scheme>
<cachestore-scheme>
<class-scheme>
<class-name>MyCacheStore</class-name>
</class-scheme>
<remote-cache-scheme>
<scheme-ref>default-scheme</scheme-ref>
</remote-cache-scheme>
</cachestore-scheme>
<rollback-cachestore-failures>true</rollback-cachestore-failures>
</read-write-backing-map-scheme>
</backing-map-scheme>
</distributed-scheme>
<remote-cache-scheme>
<scheme-name>default-scheme</scheme-name>
<initiator-config>
<tcp-initiator>
<remote-addresses>
<socket-address>
<address>XYZ</address>
<port>9909</port>
</socket-address>
</remote-addresses>
</tcp-initiator>
</initiator-config>
</remote-cache-scheme>
I know that the configuration defined for "MyScheme" is wrong, but I do not know how to configure "MyScheme" correctly to make my distributed cache part of the same cluster that all the other caches, which use the default scheme, have joined. Currently, this isn't happening.
Thanks.
RG
Message was edited by:
user602943
Hi,
Is it that I need to define my distributed scheme with the CacheStore in the server coherence-cache-config.xml, and then on the client side use a remote-cache-scheme to connect to get my distributed cache?
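That is the usual split. A sketch of the server-side scheme under that assumption (note the remote-cache-scheme is no longer nested inside the cachestore-scheme; MyCacheStore and the service name stand in for the questioner's own classes and names):

```xml
<!-- Server-side coherence-cache-config.xml: distributed cache backed by a cache store. -->
<distributed-scheme>
  <scheme-name>MyScheme</scheme-name>
  <service-name>MyDistributedService</service-name>
  <backing-map-scheme>
    <read-write-backing-map-scheme>
      <internal-cache-scheme>
        <local-scheme/>
      </internal-cache-scheme>
      <cachestore-scheme>
        <class-scheme>
          <class-name>MyCacheStore</class-name>
        </class-scheme>
      </cachestore-scheme>
      <rollback-cachestore-failures>true</rollback-cachestore-failures>
    </read-write-backing-map-scheme>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>
<!-- Clients keep only the remote-cache-scheme (the TCP*Extend initiator from the question). -->
```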
Thanks, -
Distributed cache with a backing-map as another distributed cache
Hi All,
Is it possible to create a distributed cache with a backing-map-scheme that is another distributed cache with local-storage disabled?
Please let me know how to configure this type of cache.
regards
S
Hi Cameron,
I am trying to create a distributed-scheme with a backing-map scheme. Is it possible to configure another distributed cache as a backing-map scheme for a cache?
<distributed-scheme>
<scheme-name>MyDistCache-2</scheme-name>
<service-name> MyDistCacheService-2</service-name>
<backing-map-scheme>
<external-scheme>
<scheme-name>MyDistCache-3</scheme-name>
</external-scheme>
</backing-map-scheme>
</distributed-scheme>
<distributed-scheme>
<scheme-name>MyDistCache-3</scheme-name>
<service-name> MyDistBackCacheService-3</service-name>
<local-storage>false</local-storage>
</distributed-scheme>
Please correct my understanding.
Regards
Srini