Cache persistent
Hi Gurus,
Can anybody explain the Lookup Cache Persistent option? If I enable it, what happens in the lookup transformation?
Thanks,
pcs
Edited by: 848485 on Mar 31, 2011 5:15 AM
Hi,
A persistent cache lets Informatica save the lookup cache to disk and reuse it the next time it processes the lookup transformation.
This only works if the lookup transformation is configured to use a cache.
Regards,
Shusr
Similar Messages
-
If we set cache persistent time on a physical table, are there any issues we will face?
Hi all,
One small question: without using event polling tables, if I set the cache persistent time, will I face any problems?
Any disadvantages I will face?
Thanks

Hi,
Setting the cache persistent time causes no problem at all, but it is a fixed interval you set per table. Say you set it to 2 minutes; if, a year later, someone updates the table and wants to see the results within a minute, you have to change the cache persistent time manually. With event polling there is no need for that: the cache is updated directly when the polling table records a change, so for maintenance purposes event polling is the better option.
BTW,one link to answer your cache q's
http://obiee101.blogspot.com/search/label/CACHE
Don't forget to close your previous threads.
Thanks,
Saichand.v -
How to disable or decrease cache persistence time in the dashboard?
Hi everyone,
I have used the AUTHORIZATION concept in my OBIEE project with the help of an AUTHORIZATION table in a SQL database. In the database table, records have a User-ID along with other columns such as name and email, plus a group name.
It works properly, but when I make changes to the AUTHORIZATION table in the database, such as adding a new user or changing the group membership of an existing user, the change does not reflect in the dashboard link IMMEDIATELY.
For example, user SHYAM is a member of both the Organization Head group and the Vertical Head group, so he can see the reports of both groups.
But if we update his membership to the Vertical Head group only, by deleting the Organization Head group membership in the database, the change does not reflect on the dashboard.
I have to restart the OBIEE services on the server; only then are the changes visible.
Is there any way to disable the cache completely, or at least decrease the cache persistence time to a minimum, i.e. a few seconds or even zero?
Please help me with a solution.

In your init block that fetches the group membership for a particular user, choose row-wise initialization for the variable target and untick the 'Use caching' option.
-
MSI Installer Cache & "Persist in Client Cache"
Hello,
I'm predominantly using the Application Model and just a few Packages. I'm noticing on Software Deployments that the Downloaded Content eventually gets purged and then when a user goes to launch the software it states that it cannot find the reference
MSI within CCMCache. I talked to some other people and they said that Packages would automatically put the cache in the Windows Installer Database to eliminate that issue and weren't sure how the Application Model handled this. Is the only way
for me to fix this to set the application to 'Persist in Client Cache?' If so, I have a lot of things to redo.
Looking for advice and how this is supposed to work under both the Application and Package Model.
Thanks!

As long as you supply the product code at the bottom of the Programs tab on the Deployment Type, the same thing will happen for applications. Essentially, the client agent adds a location from the DP to the source path for the MSI to enable repairs.
Jason | http://blog.configmgrftw.com
Hmm. These were standard MSIs that were imported, so all of those codes should have been imported correctly? Not sure why I'd be seeing issues from clients. -
Hi Guys,
I am facing this issue of having to purge the repository cache manually every day.
Is there any mechanism through which I can purge the cache automatically after a day, or after the cache reaches a certain size?
Thanks
Saurabh

Hi Saurabh,
To purge the cache automatically you have to set the cache persistent time in the tables present in the physical layer. There you can mention the time after which you want to purge the cache. The steps are provided below:
1. Double click on the table in the physical layer.
2. Select the General tab.
3. Select the Cacheable option.
4. Select the Cache Persistent time.
5. Specify the time interval after which you need the cache to be refreshed.
You have to do the same for all tables for which you want to purge the cache. -
Cache purging in Physical table
Hi Gurus,
Every physical table in the OBIEE physical layer has 'cache never expires' and 'cache persistence time' settings available. I wanted to know: if we set the cache persistence time to 10 minutes, how will the cache be purged automatically?
Thanks

Hi Gurus,
I totally agree with you, but could you please turn your kind attention to the points below:
Are you sure the physical table cache purging is done by the Oracle BI Server, and that the Administrator cannot track it?
What about 'cache never expires' -- can we purge that cache from the Cache Manager?
Please confirm.
thanks -
Hi Guys,
I am facing this issue of having to purge the repository cache manually every day.
Is there any mechanism through which I can purge the cache automatically after a day, or after the cache reaches a certain size?

To purge the cache automatically you have to set the cache persistent time on the tables present in the physical layer. There you can mention the time after which you want to purge the cache. The steps are provided below:
1. Double click on the table in the physical layer.
2. Select the General tab.
3. Select the Cacheable option.
4. Select the Cache Persistent time.
5. Specify the time interval after which you need the cache to be refreshed.
You have to do the same for all tables for which you want to purge the cache. -
Hello,
In order to "warm up the cache" of a report, I made a web template for it, changed its properties (via RSRT) to cache mode 4 (persistent cache across each application server) and persistence mode 3 (transparent table (BLOB)), and generated the report. The report uses a variable for calendar year/month. I made a variant for the variable with the values 1.2005 - 12.2005. When I run the report with exactly the same values in the selection screen it runs very fast, but when I enter 1.2005 - 9.2005 it becomes very slow, as if I hadn't warmed up the cache at all...
Please Advice,
David

I have to agree with Jeff LaRuso on this. It should work based on what you have described.
Although I'm sure you have it set up, let's back up a bit and verify that the global cache is active on the application servers the query is running on. Even though you are not caching to main memory, the global cache still needs to be on. The query could be looking in the local session cache and finding the results, which could explain why the identical query runs very fast.
When you say you have the variable marked as can be changed during navigation - that means you have an X in the box, correct?
You've verified that the variable you are using in the query is the variable you have marked as changeable?
Any idea how big your global cache is? There is a setting somewhere that sets the maximum size of a result that can be cached, as a percentage of the OLAP cache. I'm not certain whether that would apply to a result set that is cached persistently, but you never know.
What Version and Svc Pack are you on?
Pizzaman:) -
Implementing Automatic cache purging
Hi All ,
I want to implement automatic cache purging using an event polling table in OBIEE.
I followed one site where they asked me to create a table in the database with the following columns:
1.update_type
2.update_date
3.databasename
4.catalogname
5.schemaname
6.tablename.
Here I have one doubt: in my RPD I have two tables which are used in 4 catalogs. My doubt is, how do I know which catalog the table came from, so that I can populate the catalog names in the backend table?
if any one knows please let me know.
Thanks
Sree

Hi,
The below links should help you
http://obiee101.blogspot.com/2008/03/obiee-manage-cache-part-1.html
and
http://oraclebizint.wordpress.com/2007/12/18/oracle-bi-ee-101332-scheduling-cache-purging/
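For the event polling route, the ETL process (or any scheduled job) inserts one row per changed physical table into the polling table; the BI Server then purges the matching cache entries and removes the row. Here is a minimal sketch of building such an insert; the table name S_NQ_EPT and the sample database/schema/table names are placeholders to adapt to your environment:

```python
def build_ept_insert(database, catalog, schema, table):
    # One row per changed physical table; the BI Server polls this table,
    # purges the matching cache entries, and then deletes the row.
    return (
        "INSERT INTO S_NQ_EPT "
        "(UPDATE_TYPE, UPDATE_TS, DATABASE_NAME, CATALOG_NAME, SCHEMA_NAME, TABLE_NAME) "
        f"VALUES (1, CURRENT_TIMESTAMP, '{database}', '{catalog}', '{schema}', '{table}')"
    )

print(build_ept_insert("OraclePROD", "", "SALES", "FACT_ORDERS"))
```

Run this statement against the polling table at the end of each load, once per table that changed.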
To purge the cache automatically you have to set the cache persistent time in the tables present in the physical layer. There you can mention the time after which you want to purge the cache. The steps are provided below:
1. Double click on the table in the physical layer.
2. Select the General tab.
3. Select the Cacheable option.
4. Select the Cache Persistent time.
5. Specify the time interval after which you need the cache to be refreshed.
You have to do the same for all tables for which you want to purge the cache
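If time-based expiry is not enough, the cache can also be purged on a schedule: the BI Server exposes ODBC procedures such as SAPurgeAllCache() that can be called through the nqcmd utility. A minimal sketch; the DSN, credentials, and file names below are placeholders:

```python
# Scheduled purge sketch: write the purge call to a file and hand it to nqcmd.
# The DSN, credentials, and file names below are placeholders.
PURGE_SQL = "Call SAPurgeAllCache();"  # BI Server ODBC purge procedure

def build_purge_command(dsn, user, password, sql_file):
    # nqcmd ships with OBIEE; -s points at a file containing the purge call.
    return ["nqcmd", "-d", dsn, "-u", user, "-p", password, "-s", sql_file]

cmd = build_purge_command("AnalyticsWeb", "Administrator", "password", "purge.sql")
print(" ".join(cmd))
# On the BI server you would write PURGE_SQL to purge.sql and run:
#   import subprocess; subprocess.run(cmd, check=True)
```

Schedule this from cron or any job scheduler to get a nightly purge without touching the persistence times.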
Thanks
Deva
Edited by: Devarasu on Sep 28, 2011 2:39 PM -
Hi All
The BI Server cache can be deleted through: 1. the BI tool (Manage > Cache), 2. setting a cache persistence time on the table, and 3. event polling tables.
How can we delete the presentation server cache, and in how many ways?

Hi,
You can manually close all the cursors or running queries through Settings > Manage Sessions. However, if you want to automate this, look up the cache directory in your instanceconfig file and write a shell script to delete all the cache files in that directory. Hope this helps.
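The shell-script idea can equally be done in a few lines of Python; the demo below uses a temporary directory as a stand-in for the presentation server cache directory named in instanceconfig:

```python
import tempfile
from pathlib import Path

def delete_cache_files(cache_dir):
    # Remove every regular file under the presentation server cache directory.
    deleted = 0
    for p in Path(cache_dir).rglob("*"):
        if p.is_file():
            p.unlink()
            deleted += 1
    return deleted

# Demo against a temp dir standing in for the real cache directory.
demo = tempfile.mkdtemp(prefix="sawcache")
for i in range(3):
    Path(demo, f"cache{i}.tmp").touch()
print(f"deleted {delete_cache_files(demo)} cache files")
```

Point `delete_cache_files` at the real cache path and schedule it, and you have the automated cleanup described above.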
If this helps mark the same as helpful
Regards, -
If the cache goes down, how do I bypass transparent caching in the CSM?
Hi,
I configured transparent caching on the CSM.
But when the cache goes down, the CSM drops the traffic to the web server.
Do I need more configuration in this case?
Please let me know how to bypass the cache when it goes down.
thanks.
============== CSM configuration ==================
module ContentSwitchingModule 3
vlan 10 client
ip address 192.168.112.2 255.255.255.0
route 172.29.0.0 255.255.0.0 gateway 192.168.112.1
vlan 11 server
ip address 192.168.112.2 255.255.255.0
gateway 192.168.112.3
route 172.18.1.10 255.255.255.255 gateway 192.168.112.4
probe CACHE icmp
interval 2
retries 1
failed 2
receive 1
serverfarm CACHE
no nat server
no nat client
real 172.18.1.10
inservice
probe CACHE
serverfarm FORWARD
no nat server
no nat client
predictor forward
policy NOCACHE
client-group 10
serverfarm FORWARD
policy CACHE
serverfarm CACHE
vserver FROMCACHE
virtual 0.0.0.0 0.0.0.0 any
serverfarm FORWARD
persistent rebalance
inservice
vserver REDIRECT
virtual 0.0.0.0 0.0.0.0 tcp www
vlan 10
serverfarm CACHE
persistent rebalance
slb-policy NOCACHE
slb-policy CACHE
inservice
access-list 10 permit 172.29.1.10

Under your vserver REDIRECT, instead of configuring 'serverfarm CACHE', configure 'serverfarm CACHE backup FORWARD'.
If the serverfarm cache goes down, the CSM will use the backup which is a forward.
However, in this case, the response from the server will probably not come back to the CSM, so you should configure the vserver with the command 'unidirectional' as well.
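Putting those two suggestions together, the REDIRECT vserver would look roughly like this (a sketch only; the 'serverfarm ... backup ...' form is the same one used elsewhere in CSM configs, so verify against your CSM release):

```
vserver REDIRECT
 virtual 0.0.0.0 0.0.0.0 tcp www
 vlan 10
 serverfarm CACHE backup FORWARD
 persistent rebalance
 slb-policy NOCACHE
 slb-policy CACHE
 unidirectional
 inservice
```

With this in place, traffic falls through to the FORWARD serverfarm whenever the CACHE probe marks the cache down.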
Regards,
Gilles. -
How do I create DRM Licenses that can be used for Offline Playback?
[ Background ]
Typically, a license is acquired from the DRM license server when the client video player encounters a video that is DRM-protected. However, some use cases call for the ability for end-users to be able to play DRM content when a network connection is not available to the device.
For example - the user wants to download a video from their home, acquire a license to play the content, and then get onto a train/airplane where a network connection isn't available. On the train/airplane/etc..., the end user expects to be able to watch their video that they downloaded at home.
[ DRM Policy Requirements ]
For this use case to be possible, the DRM license must be acquired when the network connection is available. In addition, the DRM license must be allowed to persist on the device disk. For this to be possible, one value must be present in the DRM policy:
1. Offline License Caching must be set to 1 or higher. This policy parameter indicates how many minutes a license is allowed to be persisted to disk once it has been acquired from the license server.
[ Workflow ]
Now that the content has been packaged with a DRM policy that allows for offline caching (persistence) of a license, the video player simply has to acquire the license before the network connection is lost. This can be accomplished via the typical API calls that the video player application developer would use for a normal DRM license acquisition (DRMManager.loadVoucher()).
For example, by using the DRMManager ActionScript3 API: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/drm/DRMManage r.html
For other platforms supported by Adobe Access DRM (iOS, Android, etc...), please refer to the appropriate documentation around playback of DRM content on how to pre-fetch the DRM license.
[ Workflow - iOS ]
Enabling offline playback using the Primetime Media Player SDK (PSDK) for iOS requires a bit of self-extraction and instantiation of the DRM metadata from the m3u8, because as of PSDK v1.2 the PSDK APIs do not expose a mechanism for extracting the DRM metadata.
You must manually (outside of the PSDK) get access to the DRM Metadata, instantiate a DRMContentData, and then use the DRMManager class to obtain a license for your content. Afterwards, you may download your HLS content and stream it to your video player application locally. Since the DRM license was pre-acquired before playback of the downloaded HLS content, your video player application does not require a network connection to acquire a DRM license. Below is a code sample of obtaining the DRM Metadata from the m3u8 manifest and then using it with the DRMManager singleton.
/* Main entry point; playlistURL is the URL to a top-level m3u8. */
void PlayContent(NSURL* playlistURL)
{
    DRMManager* drmManager = [DRMManager sharedManager];
    if (![drmManager isSupportedPlaylist:playlistURL])
        return;

    /* First we need a handler for any errors that may occur. */
    DRMOperationError errorHandler = ^(NSUInteger major, NSUInteger minor, NSError* nsErr) {
        /* Report the error. */
    };

    /* Ask the DRM manager to extract the DRM metadata out of the playlist. */
    [drmManager getUpdatedPlaylist:playlistURL
                             error:errorHandler
                           updated:^(NSURL* newPlaylist, DRMMetadata* newMetadata) {
        (void)newPlaylist; /* The updated URL is not used in this code sample. */
        if (!newMetadata)
            return;

        /* We try to satisfy the requirements of the first policy in the
           metadata. If there are multiple policies, only one of them needs
           to be satisfied for the license request to go through. */
        DRMPolicy* drmPolicy = NULL;
        for (id object in newMetadata.policies) {
            drmPolicy = (DRMPolicy*)object;
            break;
        }
        if (drmPolicy == NULL) {
            /* Report an error: the metadata is malformed. DRM metadata
               must contain at least one policy. */
            return;
        }
        if (drmPolicy.authenticationMethod != ANONYMOUS) {
            /* Authenticate first, then acquire the license. */
        }
    }];
}
However, please note that the above only describes the DRM license handling for offline playback. The actual playback of the downloaded video using the Primetime client SDK is not officially supported; you will have to use your own mechanism for saving and playing back HLS content locally within the application.
Theoretically, it is possible to save the HLS stream in the app and then embed an HTTP server in the app to stream the HLS content back into the application. However, the Primetime client SDK does not expose a way to do this directly, since it is not yet supported.
cheers,
/Eric. -
Updating more than one database...
I am looking at a scenario where, each time a change is made to several different caches, we need to update both a legacy database (mainframe, non-relational, actually hierarchical) that is used by several large batch applications (which will be converted one by one over time to use the application built on top of the cache), and a new database (modern hardware and OS, relational, organized quite differently from the legacy database!).
The sync requirements are:
* It is REALLY bad if only one database gets updated but not the other!
* The legacy database can be about 5 minutes behind the cache.
* The new database is only used for making the cache persistent (it is fully loaded into the cache at system startup) and really has no requirement on how far behind the cache it can be, as long as data is never lost (we will use backup count = 1).
Updating the legacy database is NOT straight-forward and will require some data transformations to be performed. They can either be done in the legacy system (if we send the change info using some safe messaging for instance) or in the new environment if we want to update the old database directly.
A factor complicating things further is that the legacy system has some consistency requirements on the data (in particular when creating new objects that have relationships to each other; think of order / order row, even if this is not the kind of application we have in mind).
For performance reasons (and since it seems from another thread that partial writes are a bigger problem with write-through), we would probably like to use write-behind to update the new database.
Anybody who have some experience of similar situations and can give some suggestions on how to best go about it?
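For reference, the write-behind pattern under consideration can be sketched in a few lines. This is an illustration of the idea only, not the Coherence API (there, a CacheStore's storeAll() receives the batch):

```python
from collections import OrderedDict

class WriteBehindCache:
    """Toy write-behind: puts are acknowledged against the cache immediately;
    the database write is deferred and flushed in batches."""

    def __init__(self):
        self._cache = {}
        self._dirty = OrderedDict()  # keys awaiting a store, in arrival order

    def put(self, key, value):
        self._cache[key] = value
        self._dirty[key] = None      # defer the database write

    def flush(self, database):
        # One batched store call (coalesces repeated writes to the same key).
        batch = list(self._dirty)
        for k in batch:
            database[k] = self._cache[k]
        self._dirty.clear()
        return len(batch)

cache, db = WriteBehindCache(), {}
cache.put("order:1", "open")
cache.put("order:2", "shipped")
print("flushed", cache.flush(db), "entries ->", db)
```

The coalescing of repeated writes is exactly why write-behind is fast, and also why a crash between put and flush can lose updates unless a backup copy of the queue exists.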
Best Regards
Magnus

I may also add that our current thinking is to use a "write beside" pattern for the legacy system and a write-behind pattern on the cache to update the new database. An XA transaction will then be used to make sure that either both the old system and the cache get updated, or neither does. Does this sound like a reasonable solution?
Best Regards
Magnus -
Hi,
I have tried an entity bean application. I am able to deploy my entity bean in WebLogic 7.0 server. But when I ran the client program I am getting the following error.
javax.naming.NameNotFoundException : Unable to resolve 'java:comp.env/ejb/AccountBean' Resolved: ' ' unresolved : 'java:comp': remaining name 'java:comp.env/ejb/AccountBean'
My ejb-jar.xml is as follows:
<ejb-jar>
<enterprise-beans>
<entity>
<ejb-name>AccountBean</ejb-name>
<home>AccountHome</home>
<remote>Account</remote>
<ejb-class>AccountBean</ejb-class>
<persistence-type>Bean</persistence-type>
<prim-key-class>AccountPK</prim-key-class>
<reentrant>False</reentrant>
<primkey-field>AccountID</primkey-field>
<env-entry>
<env-entry-name>jdbc.drivers</env-entry-name>
<env-entry-type>java.lang.String</env-entry-type>
<env-entry-value>oracle.jdbc.driver.OracleDriver</env-entry-value>
</env-entry>
<env-entry>
<env-entry-name>JDBC-URL</env-entry-name>
<env-entry-type>java.lang.String</env-entry-type>
<env-entry-value>jdbc:oracle:thin:@erp:1521:Oracle9i</env-entry-value>
</env-entry>
<ejb-ref>
<ejb-ref-name>ejb/AccountBean</ejb-ref-name>
<ejb-ref-type>Entity</ejb-ref-type>
<home>AccountHome</home>
<remote>Account</remote>
<ejb-link>AccountBean</ejb-link>
</ejb-ref>
<resource-ref>
<res-ref-name>jdbc/bmp-account</res-ref-name>
<res-type>javax.sql.DataSource</res-type>
<res-auth>Container</res-auth>
<res-sharing-scope>Shareable</res-sharing-scope>
</resource-ref>
</entity>
</enterprise-beans>
<assembly-descriptor>
</assembly-descriptor>
</ejb-jar>
My weblogic-ejb-jar.xml is as follows:
<weblogic-ejb-jar>
<weblogic-enterprise-bean>
<ejb-name>AccountBean</ejb-name>
<entity-descriptor>
<pool>
</pool>
<entity-cache>
<cache-between-transactions>false</cache-between-transactions>
</entity-cache>
<persistence>
</persistence>
<entity-clustering>
</entity-clustering>
</entity-descriptor>
<transaction-descriptor>
</transaction-descriptor>
<reference-descriptor>
<resource-descriptor>
<res-ref-name>jdbc/bmp-account</res-ref-name>
<jndi-name>MariJNDI</jndi-name>
</resource-descriptor>
<ejb-reference-descriptor>
<ejb-ref-name>ejb/AccountBean</ejb-ref-name>
<jndi-name>AccountBean</jndi-name>
</ejb-reference-descriptor>
</reference-descriptor>
</weblogic-enterprise-bean>
</weblogic-ejb-jar>
My Client Program is as follows:
import javax.ejb.*;
import javax.naming.*;
public class Client{
public static void main(String[] args){
Account account = null;
try{
Context ctx = new InitialContext(System.getProperties());
AccountHome home = (AccountHome) ctx.lookup("java:comp/env/ejb/AccountBean");
home.create("123-456-7890","John");
}catch(...){
Any help is appreciated.
Thanks in advance.
Regards,
Mari.

Hi,
My Client Program is as follows:
import javax.ejb.*;
public class Client{
public static void main(String[] args){
Account account = null;
try{
Context ctx = new InitialContext(System.getProperties());
AccountHome home = (AccountHome) ctx.lookup("java:comp/env/ejb/AccountBean");
home.create("123-456-7890","John");
}catch(...){
}

What are the System Properties? Are you running the client remotely? Are you running the client as a standalone Java app or using WebLogic's Client Runner? 'java:comp' as the JNDI name is usually used when the client runs inside the container... but that doesn't seem to be the case here.
Kexkey -
Predictor Forward in CSM (catalyst 6509)
--begin ciscomoderator note-- The following post has been edited to remove potentially confidential information. Please refrain from posting confidential information on the site to reduce security risks to your network. -- end ciscomoderator note --
Hello:
My problem is that I cannot make the Catalyst forward packets coming from the client VLAN when the cache is down. Config attached.

6509#sh runn
Building configuration...
Current configuration : 5084 bytes
version 12.1
service timestamps debug uptime
service timestamps log uptime
no service password-encryption
hostname router
boot system slot0:c6sup22-ps-mz.121-13.E3.bin
enable secret 5 xxxxxxxxxxxxxxxxxxxxx
enable password xxxxxxxxxx
ip subnet-zero
no ip domain-lookup
ip slb mode csm
ip slb vlan 30 server
ip address 192.168.198.2 255.255.255.0
gateway 192.168.198.200
ip slb vlan 40 server
ip address X.X.0.1 255.255.255.0
ip slb vlan 20 client
ip address 10.1.1.1 255.255.255.0
ip slb probe PRUEBA icmp
address X.X.0.2
interval 5
retries 1
ip slb serverfarm CACHE
no nat server
no nat client
real X.X.0.2
inservice
probe PRUEBA
ip slb serverfarm ROUTE
no nat server
no nat client
predictor forward
ip slb vserver FROMCACHE
virtual 0.0.0.0 0.0.0.0 any
vlan 40
serverfarm ROUTE
persistent rebalance
inservice
ip slb vserver HTTP
virtual 0.0.0.0 0.0.0.0 tcp www
vlan 20
serverfarm CACHE
persistent rebalance
inservice
ip slb vserver INTERNET
virtual 0.0.0.0 0.0.0.0 any
vlan 20
serverfarm ROUTE
persistent rebalance
inservice
ip slb vserver RESPONSE
virtual 0.0.0.0 0.0.0.0 any
vlan 30
serverfarm CACHE backup ROUTE
persistent rebalance
inservice
ip slb vserver RTSP
virtual 0.0.0.0 0.0.0.0 tcp rtsp service rtsp
vlan 20
serverfarm CACHE
persistent rebalance
inservice
ip slb vserver WMT
virtual 0.0.0.0 0.0.0.0 tcp 1755
vlan 20
serverfarm CACHE
persistent rebalance
inservice
no dss interface-purge
no dss range-purge
no dss mac-purge
mls rp ip
no mls netflow
mls flow ip destination
mls flow ipx destination
redundancy
mode rpr-plus
main-cpu
auto-sync running-config
auto-sync standard
interface FastEthernet6/12
no ip address
switchport
switchport access vlan 20
interface FastEthernet6/36
no ip address
duplex full
speed 100
switchport
switchport access vlan 40
interface FastEthernet6/46
no ip address
switchport
interface FastEthernet6/47
no ip address
switchport
switchport access vlan 30
interface FastEthernet6/48
no ip address
switchport
switchport access vlan 30
interface Vlan1
ip address 192.1.1.1 255.255.255.0
interface Vlan20
ip address 10.1.1.2 255.255.255.0
interface Vlan30
ip address 192.168.198.10 255.255.255.0
interface Vlan40
ip address X.X.0.10 255.255.255.0
ip default-gateway 192.168.198.200
ip classless
ip route 0.0.0.0 0.0.0.0 192.168.198.200
no ip http server

When the cache engine goes down, the switch should forward traffic without using the cache. There is a keepalive mechanism to keep track of this: the switch and the cache exchange keepalives regularly. Check whether there is a problem with the keepalives.