12.1.4 build 53 Transaction caching problem
Hello,
I am seeing something new with 12.1.
Whenever I call a transaction within a transaction repetitively (maybe 5000 times), I see that all the calls are recorded in the transaction manager, and this is loading the XMII_Transaction_Cache table in the database.
It's also keeping a log of all the tracer output.
How can I turn this off? This setup is loading the DB heavily and crashing the system.
Regards,
Pramod
Hi.
First of all, you should try to upgrade to at least MII SP5, which handles this a bit differently. Then you can try one of two things.
If I remember right, the old transaction action block contains a parameter to which you can link the persist value; the new transaction action block contains a configuration parameter where you can set it. This will make the caching delete the record every 5 minutes or so. Alternatively, you can add persist=onerror to the URL from which you call the transaction, or to its schedule entry.
But I think the new version handles this differently and does not save the intermediate action calls at all if no error occurs.
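As a sketch of the URL option (the Runner path, parameter name, and casing below are assumptions that can differ by MII version, so verify them against your installation's documentation before relying on this):

```
http://<server>:<port>/XMII/Runner?Transaction=MyProject/MyTransaction&Persist=onerror
```

With a setting like this, the transaction manager should only persist the run details when the transaction ends in error, instead of on every call.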
BR
Poul.
Similar Messages
-
Coherence 3.6.0 transactional cache and POF - NULL values
Hi,
We are trying to use the new transactional scheme defined in 3.6.0 and we encounter an abnormal behaviour. The code executes without any exception or warnings but in the cache we find the key associated with a NULL value.
To try to identify the problem, we defined two services (see cache-config below):
- one transactional cache
- one distributed cache
If we insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the Coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
In the same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put succeeds in any scenario (both key and value are visible using the Coherence console).
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
<caching-scheme-mapping>
<cache-mapping>
<cache-name>cnt.*</cache-name>
<scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
</cache-mapping>
<cache-mapping>
<cache-name>stt.*</cache-name>
<scheme-name>storage.distributedcache.stt.scheme</scheme-name>
</cache-mapping>
</caching-scheme-mapping>
<caching-schemes>
<transactional-scheme>
<scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
<service-name>storage.transactionalcache.cnt</service-name>
<thread-count>10</thread-count>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>String</param-type>
<param-value>cnt-pof-config.xml</param-value>
</init-param>
</init-params>
</serializer>
<backing-map-scheme>
<local-scheme>
<high-units>250M</high-units>
<unit-calculator>binary</unit-calculator>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</transactional-scheme>
<distributed-scheme>
<scheme-name>storage.distributedcache.stt.scheme</scheme-name>
<service-name>storage.distributedcache.stt</service-name>
<serializer>
<class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
<init-params>
<init-param>
<param-type>String</param-type>
<param-value>cnt-pof-config.xml</param-value>
</init-param>
</init-params>
</serializer>
<backing-map-scheme>
<local-scheme>
<high-units>250M</high-units>
<unit-calculator>binary</unit-calculator>
</local-scheme>
</backing-map-scheme>
<autostart>true</autostart>
</distributed-scheme>
</caching-schemes>
</cache-config>
Failing code (uses transaction APIs 3.6.0):
public static void main(String[] args)
{
    Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
    con.setAutoCommit(false);
    try
    {
        OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
        CId tID = new CId();
        tID.setId(11111L);
        C tC = new C();
        tC.setVal(new BigDecimal("100.1"));
        cache.insert(tID, tC);
        con.commit();
    }
    catch (Exception e)
    {
        e.printStackTrace();
        con.rollback();
    }
    finally
    {
        con.close();
    }
}
Code that succeeds (but without transaction APIs):
public static void main(String[] args)
{
    try
    {
        NamedCache cache = CacheFactory.getCache("stt.t1");
        CId tID = new CId();
        tID.setId(11111L);
        C tC = new C();
        tC.setVal(new BigDecimal("100.1"));
        cache.put(tID, tC);
    }
    catch (Exception e)
    {
        e.printStackTrace();
    }
    finally
    {
    }
}
And here is what we see in the Coherence console when using the transactional APIs:
Map (cnt.t1): list
CId {
id = 11111
} = null
Any suggestion, please?
Cristian,
After looking at your configuration, I noticed that it is incorrect. For a transactional scheme you cannot specify a backing-map-scheme.
Your config contained:
<backing-map-scheme>
<local-scheme>
<high-units>250M</high-units>
<unit-calculator>binary</unit-calculator>
</local-scheme>
</backing-map-scheme>
To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
<transactional-scheme>
<scheme-name>small-high-units</scheme-name>
<service-name>TestTxnService</service-name>
<autostart>true</autostart>
<high-units>1M</high-units>
</transactional-scheme>
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
Can you please change your configuration with the above changes and let us know your results.
Thanks,
John
Edited by: jspeidel on Sep 16, 2010 10:44 AM -
Caching problem w/ primary-foreign key mapping
I have seen this a couple of times now. It is not consistent enough to
create a simple reproducible test case, so I will have to describe it to you
with an example and hope you can track it down. It only occurs when caching
is enabled.
Here are the classes:
class C1 { int id; C2 c2; }
class C2 { int id; C1 c1; }
Each class uses application identity using static nested Id classes: C1.Id
and C2.Id. What is unusual is that the same value is used for both
instances:
int id = nextId();
C1 c1 = new C1(id);
C2 c2 = new C2(id);
c1.c2 = c2;
c2.c1 = c1;
This all works fine using optimistic transactions with caching disabled.
Although the integer values are the same, the oids are unique because each
class defines its own unique oid class.
Here is the schema and mapping (this works with caching disabled but fails
with caching enabled):
table t1: column id integer, column revision integer, primary key (id)
table t2: column id integer, column revision integer, primary key (id)
<jdo>
<package name="test">
<class name="C1" objectid-class="C1$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t1"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c2">
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="column.id" value="id"/>
</extension>
</field>
</class>
<class name="C2" objectid-class="C2$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t2"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c1">
<extension vendor-name="kodo" key="dependent" value="true"/>
<extension vendor-name="kodo" key="inverse-owner" value="c2"/>
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="table" value="t1"/>
<extension vendor-name="kodo" key="ref-column.id" value="id"/>
<extension vendor-name="kodo" key="column.id" value="id"/>
</extension>
</field>
</class>
</package>
</jdo>
Because the ids are known to be the same, the primary key values are also
used as foreign key values. Accessing C2.c1 is always non-null when caching
is disabled. With caching enabled, C2.c1 is usually non-null but sometimes
null. When it is null, we get warnings about dangling references to deleted
instances with id values of 0, and other similar warnings.
The workaround is to add a redundant column with the same value. For some
reason this works around the caching problem (this is unnecessary with
caching disabled):
table t1: column id integer, column id2 integer, column revision integer,
primary key (id), unique index (id2)
table t2: column id integer, column revision integer, primary key (id)
<jdo>
<package name="test">
<class name="C1" objectid-class="C1$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t1"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c2">
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="column.id" value="id2"/>
</extension>
</field>
</class>
<class name="C2" objectid-class="C2$Id">
<extension vendor-name="kodo" key="jdbc-class-map" value="base">
<extension vendor-name="kodo" key="table" value="t2"/>
</extension>
<extension vendor-name="kodo" key="jdbc-version-ind"
value="version-number">
<extension vendor-name="kodo" key="column" value="revision"/>
</extension>
<field name="id" primary-key="true">
<extension vendor-name="kodo" key="jdbc-field-map" value="value">
<extension vendor-name="kodo" key="column" value="id"/>
</extension>
</field>
<field name="c1">
<extension vendor-name="kodo" key="dependent" value="true"/>
<extension vendor-name="kodo" key="inverse-owner" value="c2"/>
<extension vendor-name="kodo" key="jdbc-field-map" value="one-one">
<extension vendor-name="kodo" key="table" value="t1"/>
<extension vendor-name="kodo" key="ref-column.id" value="id2"/>
<extension vendor-name="kodo" key="column.id" value="id"/>
</extension>
</field>
</class>
</package>
</jdo>
Needless to say, the extra column adds a lot of overhead, including the
addition of a second unique index, for no value other than working around
the caching defect.
Tom-
The first thing that I think of whenever I see a problem like this is
that the equals() and hashCode() methods of your application identity
classes are not correct. Can you check them to ensure that they are
written in accordance with the guidelines at:
http://docs.solarmetric.com/manual.html#jdo_overview_pc_identity_application
If that doesn't help address the problem, can you post the code for your
application identity classes so we can double-check, and we will try to
determine what might be causing the problem.
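As an illustrative sketch of those guidelines (the class and field names below are hypothetical, not taken from your posting), an application-identity class typically overrides equals() and hashCode() consistently over the primary-key fields, checks the exact identity class in equals(), and provides the string constructor/toString() pair:

```java
import java.io.Serializable;

// Hypothetical application-identity class for a persistent class with a
// single int primary-key field named "id", written along the lines of the
// JDO application-identity guidelines referenced above.
public class C1Id implements Serializable {
    public int id;

    public C1Id() {
    }

    // JDO expects a constructor taking the toString() form of the identity.
    public C1Id(String str) {
        this.id = Integer.parseInt(str);
    }

    public String toString() {
        return String.valueOf(id);
    }

    // equals() must check the identity class itself, not just the key value.
    public boolean equals(Object other) {
        if (other == this)
            return true;
        if (!(other instanceof C1Id))
            return false;
        return ((C1Id) other).id == this.id;
    }

    // hashCode() must be consistent with equals() over the key fields.
    public int hashCode() {
        return id;
    }
}
```

The instanceof check is the detail that matters for your setup: because C1.Id and C2.Id each compare against their own class, two ids holding the same int value but belonging to different classes must never compare equal, even though the int values collide.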
Marc Prud'hommeaux [email protected]
SolarMetric Inc. http://www.solarmetric.com -
Desktop AIR app debug build running from cache?
I'm new to Flash Builder (using 4.5 for OS X) and I've been working on a desktop AIR app for the last week. I've been running into a strange issue when debugging my app. Every once in a while (maybe 4-5 times a day), after I make changes to my code and make a new debug build, a much older version of the app runs when I run the app with or without the debugger, or even if I export for release. It's as though it's running (or building) from the cache, except I'm building a desktop app and I'm not testing the SWF in a browser. I've turned off "Build Automatically", cleaned the project and built again, but no changes are reflected. I can't figure out how, but after cleaning, quitting Flash Builder, restarting, cleaning, building again, etc. numerous times, suddenly I will see the current version of the app up and running again. I cannot for the life of me figure out how to quickly resolve the issue, or what exactly is fixing it. Every time it seems like something else triggers FB to refresh the build. I can even run the debugger and step through my current code while seeing completely different results on the screen.
Has anyone else run into this issue? Anyone know what I may be missing? Are there settings I need to change besides "Build Automatically"? Thanks for your help,
Nick
Thanks for the answer. How exactly Flash Builder runs is a little clearer to me now. I checked, and the timestamps were indeed being updated correctly.
This helped me to look elsewhere for the issue. I actually now think it may have to do with the mobile home sync that I have going through OS X. Whenever the sync runs, FB seems to launch an outdated (as in hours old) version of the app. Despite the fact that the sync is not pulling in new files, it seems to be the issue. If I stop the sync, I start seeing the current version of the app again when I run the debugger. So I think it's just the OS causing my problem (although I still don't completely understand why exactly the issue would manifest itself in FB). Fortunately I can recreate the issue every time I run the sync, so the problem is consistent at least!
Thanks again for your help. -
"Build and Export Cache" fails to export 100% Previews:
I am moving from Bridge CS5 to CS6 and have a very large number of 100% Previews in my CS5 cache. The idea of regenerating all these 100% Previews in CS6 is NOT amusing. It is amazing that there is not a simple import function for this task as a way to make the software upgrade more "seamless"… but apparently there is not.
I have read the Adobe documents:
"Bridge Help / Manage the cache | Adobe Bridge CS4, CS5"
"Creative Suite / Work with the Adobe Bridge cache"
Based on these documents, in Bridge CS5 I used the export function in
Tools>Cache>Build and Export Cache…
with "Build 100% Previews" checked
and "Export Cache To Folders" checked.
Unfortunately, the "Build and Export Cache" fails to export 100% Previews to the target directory. They should appear in the target directory as hidden files in the format xxxxx.NEF.JPG. Yes, I do know how to view hidden files in both the Finder and in Bridge.
In Bridge CS5 I tried Purging and rebuilding the Cache for the target directory with no luck.
I tried running "Build and Export Cache" in Bridge CS6…. still no hidden Preview files in the target directory.
My settings in both CS5 & CS6 Bridge - Preferences:
"Keep 100% Previews in Cache" is checked
"Automatically Export Cache to Folders When Possible" is checked
My settings in both CS5 & CS6 Bridge - Options for thumbnail quality and preview generation:
"Generate 100% Previews" is checked
Using OS X 10.6.8
Anyone have a suggestion?
Bridge is the only piece of software I have used which incorporates a database where new versions do not provide for importing the objects of the older version. Very strange.
That is a very mild conclusion, but unfortunately, until the cache strategy for Bridge is finally sorted out, this problem will exist.
Now I have to confess that I haven't been able to use 'Automatically Export Cache to Folders When Possible' for several versions now, due to a rare problem: after the first use works fine, the created export cache almost every second time generates a warning message that, due to a failure, it needs to replace the CacheT file. Hence I deselected this option permanently.
I would think that if you already have those cached and exported 100% preview files in the dedicated folders, Bridge would read them and add them to the database and central cache file when pointed to that folder, but I'm far from sure about that.
Also, after rereading your first post: you expect Build and Export Cache to create hidden files in the target directory itself, with the double extension of the original filetype followed by .jpg? Here I'm a bit lost. Those double-extension files only appear in the central cache folder; they are visible but buried in the per-quality subfolders of the central cache at the user library (default) or at the custom location set in Bridge preferences.
The cache exported to a folder consists of hidden files. As long as I can remember there were two such files, already back in the old File Browser (I forget the extensions used then); currently they are .BridgeCache (very small, a few KB) and the, for me problematic, .BridgeCacheT file, which can grow very large and is about the same size as the subfolder generated under "full" in the central cache.
I can see the content of the central cache without problems, including the double-extension files and previews as thumbnails in the Finder; yet with the option to show hidden files enabled, I see the CacheT file only as a blank document icon, and I can't find an application on my system that can open it to reveal its content.
I just tried it on a folder with small files, with preview quality set to HQ and 100% preview (checkerboard icon in the Path Bar), and then used Tools > Build and Export Cache including 100% previews. Both the subfolder in the central cache and a hidden CacheT file of about the same size were generated without problems, and the full preview was instantly available on demand with the magnifier (all this without the auto-export-to-folders preference).
So it seems to me (if I understand everything correctly) that either the existing exported files from CS5 cause the problem, or your CS6 setup is not working correctly.
Are we still on the same track or am I completely lost? -
Qaaws not refreshing query triggered from Xcelsius, maybe a cache problem
Hi,
I'm having a problem with QAAWS and Xcelsius
I'm using a List Builder component to select multiple values in this case STATES from the efashion universe
I use the selected states as values to feed a prompt in a QAAWS query; the QAAWS query has SALES REVENUE as the result set and, in the conditions, a multi-value prompt for STATES.
When I preview my dashboard, I select the States, then UPDATE the values, and then refresh the query with a CONNECTION REFRESH button. The first time I do this it works fine and returns the sales revenue.
If I add a new State to my selection and then update and run the query again with the refresh button, it doesn't work any more; it shows the value retrieved from the first query again.
At first I thought that the query wasn't being triggered by Xcelsius, but by doing some more tests I found that the query actually runs but returns the value from the first query.
I think this is a cache problem, so is there a way to tell QAAWS to always run the query and not use the cache?
thanks,
Alejandro
Hello Alejandro,
QaaWS indeed uses a cache mechanism to speed up some Xcelsius interactions (from XI 3.0 onwards), but your issue should not be caused by this, as cache sessions are discriminated by session user id and prompt values; if you are correctly passing prompt values, QaaWS should not erroneously serve you the previous values.
Could you specify how you are passing several prompt values to the QaaWS query? There might be an issue there, so make sure that:
1. The QaaWS query prompt is set using the In List operator; otherwise only the first value will actually be taken into account.
2. In the Xcelsius Designer Data Manager, the web service input parameters are duplicated to accept several input values (you cannot submit your list of prompt values as a list to a single input parameter).
If this still does not work, I'd suggest you debug your dashboard at runtime using an HTTP sniffer like Fiddler (available from http://www.fiddler2.com/), which enables you to inspect the HTTP messages sent to and received from the server; there you should verify which prompt values are sent to the QaaWS servlet.
FYI, you can set the QaaWS cache lifetime for each query by going to the first screen of the QaaWS edit wizard, clicking the Advanced... button, and changing the value of the timeout parameter (the default is 60 seconds).
Hope that helps,
David. -
Adobe Flash CC 2014 Code Hint - Unable to build code hint cache...
Hello! I just downloaded and tried to use Flash CC 2014. Code Hinting doesn't work! So I tried some way to bring it back. And the following solution worked:
1. Quit Flash
2. Delete Code Hint Folder: C:\Users\<user>\AppData\Local\Adobe\Flash CC 2014\en_US\Configuration\CodeModel
3. Open Flash and create a new document. Edit => Preferences => Code Editor. Change "Cache Files" to 1500
4. write: import<space>
You should see that warning: "Building Code Hint Cache...". If not, press Ctrl + <Space>
5. Wait 1 minute and enjoy your code hinting
Hope this helps someone...
(Sorry for my poor English)
I have the very same problem. I'm on OS X though (Mavericks), but I found the CodeModel folder in Library/Application Support/Adobe... and deleted it.
Followed all the steps, and still nothing happens when I type import followed by the spacebar. I haven't seen any "cache being built" messages.
I have the latest version of Flash CC 2014 according to my CC manager.
Please help, coding suddenly became really difficult.
Regards
Edit: I have also not deleted the "." folder in the Source Path -
I've already posted this in the Bridge forum, but I thought it might just be relevant to ACR too, so here goes...
I've just bought a Panasonic GF2. ACR support was introduced with 6.3, so I have no problems with opening its raw files in 6.3 or 6.4RC.
If I click on a newly-uploaded raw thumbnail in Bridge, a high quality preview is generated, as expected. However, if I select the entire folder, and select 'Build and Export Cache', nothing happens - I am returned to Bridge as if nothing happened, and the thumbnails stay showing the embedded versions.
This is something I do regularly with my Nikon D300 raw files. Bridge usually counts through the images, and the previews are updated with the ACR-generated HQ versions. But no so with the new GF2.
I wondered if this had anything to do with GF2 support in ACR, rather than Bridge itself?
Curt Y wrote:
I checked the site http://www.adobe.com/products/photoshop/cameraraw.html and there is no Panasonic GF-2 listed for ACR 6.3
I thought 6.4 was released, but all I see is the 6.4 release candidate.
In fact all the cameras begin with DMC. This camera may be too new. I have seen some posts suggesting it can take as long as 2 months for a new model to be covered.
It's the 7th one down!
I can open the raw images fine, and if I actually click on a thumbnail or several thumbnails, Bridge generates HQ previews. All that's happening is that Build/Export Cache isn't working for this camera.
I am using 6.4 RC. I might try rolling back to 6.3. -
Old report is served even after a new report is uploaded to the web - cache problem?
Hello,
I have a problem. We are using a remote server for development work.
We are working in a web environment. The modules (forms & reports) are uploaded to the Unix server using web compilation links. I uploaded (generated) an old report to the server. After some modifications, a new report was uploaded. When I run the report, it always picks up the old one instead of the new one.
The timestamp on the server shows that the new one is uploaded.
I heard this could be because of a cache problem on the server. If someone has faced the same issue, please let me know the solution.
Cheers
Ram Kanala
FRM-40738 (repid is coming out null)
Looks like here it is not able to find the report
repid := find_report_object('ETOX_9ITEST');
In Forms Builder, confirm that you have defined a report object by this name for the particular form (adding the report in REPORTS_PATH would not suffice). It should also point to the corresponding report.
Please post the exact error messages as you see it with the error numbers.
Other comments.
2) Added a TNS_ADMIN entry in the registry to point to the TNSNAMES.ORA in the apps directory. This tnsnames.ora has an entry for the report server name. The report server is up when seen in the OEM console, but my report does not launch. << For 9i you do not need to do this; defining this alias is for 6i. In 9i it is "auto discovery" in the subnet.
You can remove this. (You need Tnsnames.ora for DB aliases, I am talking about reports server...)
3) Same comment as above
8)Verify the Reports 9i Service properties and check the "Log On as" information. Be sure to specify on "This account" the user information with which the installation was made.
For Windows/2000:
- Start-> Settings-> Control Panel-> Administrative Tools-> Services
- Double click on the Reports Server
- Go to "Log On" tab
- Change on the "Log On As" section to "This account" and enter the userid/password of the
user that installed Reports Server
- Ensure that the user account where the Reports Server is installed has administrator privileges
and a default printer.
- Restart the Reports Server as the new user.
(For WinNT steps will be similar)
--- A workaround until your issue is solved:
Start the report server as a standalone server:
rwserver server=server_name
Thanks
Ratheesh -
Cache problem when loading modules ?
Hello Flex friends !
I am currently experiencing a strange problem. It happens in
my production environment when using my web browser to access the
application. Here it is:
The main application is a menu on the left and a ModuleLoader in the center. Depending on which menu button I click, the ModuleLoader will load ModuleA, ModuleB, etc. Nothing special so far; everything works fine.
Then I update the ModuleA.swf and ModuleB.swf files. Now when I click the menu's buttons I get the old ModuleA and ModuleB.
This seems to be a cache problem, no? I thought the browser would compare the local ModuleA version with the server ModuleA version and then load the server version, but it does not seem to work that way. So how does it work? I'd like to understand.
Thanks for your help guys !
Olivier
Hi,
Try this.
Step 1: Go to Details > Status, get the IDoc number, then go to BD87 in R/3. Place the cursor on the RED IDoc entries in the tRFC
queue under outbound processing and click "Display the IDoc" on the menu bar.
Step 2: In the next screen, click "Display tRFC calls" (this takes you to the particular tRFC call in SM58), place the cursor
on the particular Transaction ID, go to Edit in the menu bar and press 'Execute LUW'.
(Display tRFC calls --> takes you to the particular tRFC call in SM58 --> select the Transaction ID --> Edit --> Execute LUW)
Rather than going to SM58 and executing the LUW directly, it is safer to go through BD87 with the IDoc number, as it takes you
to the particular tRFC request for that IDoc.
OR
Directly go to SM58, enter * / your user name or the background (ALEREMOTE) user name, and execute. It will show the tRFCs to be
executed for that user. Find the particular tRFC (SM37 > request name > TID from the data packet with sysfail), select the
Transaction ID in SM58, then Edit --> Execute LUW.
Thanks,
JituK -
I am facing a caching problem in the Web-Application that I've developed us
Dear Friends,
I am facing a caching problem in the Web-Application that I've developed using Java/JSP/Servlet.
Problem Description: When a hyperlink is clicked, the request is supposed to go to the handling servlet, which fetches the data (via the DAO layer) and stores it in the session. The servlet then forwards the request to the view JSP, which reads the object from the session and displays the data.
However, when the link is clicked a second time, the request does not reach our servlet and the cached (previous) page is shown. If we refresh the page, the request reaches the servlet and we get correct data. But as you will agree, we don't want users to refresh the page again and again to get updated data.
We've included these lines in JSPs also but it does no good:
<%
response.setHeader("Expires", "0");
response.setHeader("Cache-Control" ,"no-cache, must-revalidate");
response.setHeader("Pragma", "no-cache");
response.setHeader("Cache-Control","no-store");
%>
Request you to please give a solution for the same.
Thanks & Regards,
Mohan
However, when the link is clicked second time then the request is not received by our servlet
Impossible, mate... can you show your code? Are you sure there are no JavaScript errors?
Why don't you just remove your object from the session after displaying the data from it, and see if your page "automatically" hits the servlet when the link is clicked?
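One thing worth noting about the scriptlet posted above: calling setHeader("Cache-Control", ...) a second time replaces the first value, so only "no-store" is actually sent; the directives need to be combined into one header (or added via addHeader). A minimal, self-contained sketch of the combined headers using the JDK's built-in HttpServer — purely illustrative, since the real application uses servlets, where exchange.getResponseHeaders().set(...) corresponds to response.setHeader(...):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class NoCacheDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/data", exchange -> {
            // Combine all Cache-Control directives into ONE header value;
            // setting the header twice keeps only the last value.
            exchange.getResponseHeaders().set("Cache-Control",
                    "no-cache, no-store, must-revalidate");
            exchange.getResponseHeaders().set("Pragma", "no-cache");   // HTTP/1.0 proxies
            exchange.getResponseHeaders().set("Expires", "0");
            byte[] body = "fresh data".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        // Fetch the resource and show the single combined Cache-Control header.
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/data").openConnection();
        System.out.println("Cache-Control: " + conn.getHeaderField("Cache-Control"));
        try (InputStream in = conn.getInputStream()) {
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```

In a servlet this would be a single `response.setHeader("Cache-Control", "no-cache, no-store, must-revalidate")` in the doGet (or a filter), before anything is written to the response.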
cheers..
S -
Reports Builder has encountered a problem and needs to close.
Hi All,
The Report Builder on my system is behaving weirdly. If I open a report that has normal browser output, it lets me see the data model, layout model and everything. But if I open a report that has PDF output, it does not let me open the layout.
Here the output is in the context of the Oracle Applications front end, and as far as I know it shouldn't be a problem with Report Builder.
It errors out with the following message.
Reports Builder has encountered a problem and needs to close. We are sorry for the inconvenience.
And the error report contains the following message.
AppName: rwbld60.exe AppVer: 3.0.0.0 ModName: mmc60.dll
ModVer: 0.0.0.0 Offset: 0000c533
Here is the version information:
Report Builder 6.0.8.11.3
ORACLE Server Release 8.0.6.0.0
Oracle Procedure Builder 6.0.8.11.0
Oracle ORACLE PL/SQL V8.0.6.0.0 - Production
Oracle CORE Version 4.0.6.0.0 - Production
Oracle Tools Integration Services 6.0.8.10.2
Oracle Tools Common Area 6.0.5.32.1
Oracle Toolkit 2 for Windows 32-bit platforms 6.0.5.35.0
Resource Object Store 6.0.5.0.1
Oracle Help 6.0.5.35.0
Oracle Sqlmgr 6.0.8.11.3
Oracle Query Builder 6.0.7.0.0 - Production
PL/SQL Editor (c) WinMain Software (www.winmain.com), v1.0 (Production)
Oracle ZRC 6.0.8.11.3
Oracle Express 6.0.8.3.5
Oracle XML Parser 1.0.2.1.0 Production
Oracle Virtual Graphics System 6.0.5.35.0
Oracle Image 6.0.5.34.0
Oracle Multimedia Widget 6.0.5.34.0
Oracle Tools GUI Utilities 6.0.5.35.0
I have tried reinstalling the software at least 10 times, but no luck. Finally I had my system formatted and tried again. It worked the first time; then I installed Forms Builder... and I am getting the same problem again, even though I have separate homes for Reports and Forms.
Is it a problem with my system, the operating system, or the version of Oracle Developer?
Can anyone please help me out?
Thanks in advance,
Lakshman.
I have been having a similar problem and do not know if this will apply to your situation, but I found the following on Metalink.
Subject: Reports Builder Crashes Opening the Paper Layout on Large Reports on XP/2003
Doc ID:
Note:566304.1 Type: PROBLEM
Last Revision Date: 18-AUG-2008 Status: PUBLISHED
In this Document
Symptoms
Changes
Cause
Solution
References
Applies to:
Oracle Reports Developer - Version: 10.1.2.0.2 to 10.1.2.3
Microsoft Windows XP
Microsoft Windows Server 2003
Symptoms
Oracle Reports Builder crashes opening the Paper Layout on some large reports but ONLY on the following platforms:
Microsoft Windows XP Professional
Microsoft Windows Server 2003
Reports Builder displays the below message and then crashes:
Reports Builder
Reports Builder has encountered a problem and needs to close. We are sorry for the inconvenience.
If you were in the middle of something, the information you were working on might be lost.
Please tell Microsoft about this problem.
We have created an error report that you can send to us. We will treat this report as confidential and anonymous.
To see what data this error report contains, click here.
[Send Error Report] [Don't Send]
--> click on [Don't Send] button and Report Builder disappears.
-and/or-
access violation in oranls10.dll
This problem does not reproduce on:
Microsoft Windows 2000
The solutions in the below note do not work on v. 10.1.2.2.0 and 10.1.2.3.0:
Note 361206.1 Title: Reports Builder Crashes Opening a Paper Layout on Windows XP Only
Changes
Upgraded to Oracle Report Builder v. 10.1.2.x.x from a previous release.
Cause
Bug 7014048 is fixed in one-off Patch 5335195 (Base Bug) but missed inclusion (i.e. was not ported) in the latest Developer patchset v. 10.1.2.3.0
See also:
Bug 7014048 Abstract: REPORTS BUILDER CRASHES OPENING A PAPER LAYOUT.
Bug 5029530 Abstract: REPORTS BUILDER CRASHES OPENING PAPER LAYOUT MODEL ONLY ON XP BOX
Solution
To implement the solution, please execute the following steps:
1. Download one-off Patch 5335195 for v. 10.1.2.0.2 to the XP/2003 box.
2. Back up existing uiw.dll for 10.1.2.3 880KB 10/10/2006 7:23PM --> uiw_orig_10.1.2.3.zip
3. In the %ORACLE_HOME%\bin directory, replace uiw.dll (880KB) with the uiw.dll (1,316KB) from
Patch 5335195 --> 1,316KB 7/24/06 11:12PM
4. Re-launch Report Builder 10.1.2.3.0 on XP/2003.
5. Open the <large report>.rdf.
6. Double-click on the Paper Layout icon.
7. The result should be: Successfully open the Paper Layout for <large report>.rdf.
References
Bug 5029530 - REPORTS BUILDER CRASHES OPENING PAPER LAYOUT MODEL ONLY ON XP BOX
Bug 7014048 - REPORTS BUILDER CRASHES OPENING A PAPER LAYOUT.
Note 361206.1 - Reports Builder Crashes Opening a Paper Layout on Windows XP Only
Patch 5335195 - REPORTS BUILDER CRASHES OPENING PAPER LAYOUT MODEL DUE TO ACCESS -
Bridge update does not fix caching problems.
Dear Adobe,
The 5.0.1.23 update for Bridge CS6 does NOT fix the problem of constantly re-caching layered TIF files.
I originaly reported the problem here on May 16, 2012.
http://forums.adobe.com/thread/1007560
At that time I also submitted a bug report via photoshop.com, and received an e-mail response from Adobe support confirming the problem had been replicated in their lab and promising a fix in the next update.
I've since tracked several other reports of this bug and related cache problems.
I assume that, at best, we will have to wait another 6 months or more for the next update. How can I ensure this bug will be addressed?
redcrown on guard wrote:
The 5.0.1.23 update for Bridge CS6 does NOT fix the problem of constantly re-caching layered TIF files.
At that time I also submitted a bug report via photoshop.com, and received an e-mail response from Adobe support confirming the problem had been replicated in their lab and promising a fix in the next update.
Thank you for this bit of information. Maybe it means I can stop the deactivations/uninstall/reinstall/reactivate cycle to try yet another solution. And hopefully, this will stop the re-caching problem with other than tif files.
regards
*S* -
Caching problem in Chrome and Firefox
Hey folks,
I ran into a weird problem. I created a video player based on the Strobe Media Playback. I added a couple of plugins. This player is used to watch progressive download FLV files.
I ran into the following issue. I watch part of a video. I select another one. Then I select the previous one again. Only the cached portion of the first video is shown. The entire video will not be downloaded again from the server, but only the portion already cached on the client.
This problem is really bad in Chrome. When I restart FF, I can watch the entire video. Not in Chrome. The only way to solve this in Chrome is to clear the cache.
Any ideas?
The website is live, so you can test this yourself. http://www.submergeproductions.com/videos.aspx
All help very welcome, because this is a major issue.
Follow up: I made a quick fix. I added a random number to the FLV URL to force a re-download from the server, but this is quite a dirty fix. I would rather have a restart/continuation of the download if the file was only partially downloaded.
Thanks,
Peter
Hi Silviu,
the reason why it works now is because I uploaded a modified version. I append "?<random number>" to the URL. That prevents caching problems because the browser hasn't got that version cached. But I will still report it as a bug.
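The random-suffix workaround described above can be sketched like this (class, method and parameter names are illustrative; appending a content version or last-modified timestamp instead of a pure random number would preserve caching for unchanged files):

```java
import java.util.concurrent.ThreadLocalRandom;

public class CacheBuster {
    // Append a random query parameter so the browser treats each request
    // as a new resource and will not serve a partially cached copy.
    static String bust(String url) {
        long nonce = ThreadLocalRandom.current().nextLong(Long.MAX_VALUE);
        String sep = url.contains("?") ? "&" : "?";
        return url + sep + "nocache=" + nonce;
    }

    public static void main(String[] args) {
        String plain = "http://www.example.com/videos/clip.flv";        // illustrative URL
        String withQuery = "http://www.example.com/play?file=clip.flv"; // already has a query string
        System.out.println(bust(plain));
        System.out.println(bust(withQuery));
    }
}
```

The trade-off is that every view re-downloads the full file; a version-based suffix (e.g. the file's last-modified time) busts the cache only when the content actually changes.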
Peter -
Hi guys
We are facing a caching problem within our project. The project generates HTML code that picks up rich media ad details at random and displays them in the HTML file where the generated code is pasted. We developed two servlets: one extracts the ads from the database at random and then, depending on the ad type, calls the other servlet as the src of an iframe; the second servlet in turn outputs all the code for displaying the rich media ads. The script we generate for the user to paste onto their pages is:
<script LANGUAGE="JAVASCRIPT" src="http://192.168.1.6:8080/advert_java/servlet/GetAdServlet?region=1&zone=1&type=nossi&cachevar=yes">
</script>
The first servlet (GetAdServlet) returns the JavaScript statements and is thus called using the generated code above. The contents of the iframe are supplied by the second servlet, i.e. RichMediaServlet. This servlet is called like:
iframeURL = fullHttpDir+"/servlet/RichMediaServlet?";
iframeURL += "bannerCode="+ RNBanner (BannerCode to be called);
out.println("document.write(\"<iframe src='" + iframeURL + "' height=" + hheight +" width="+ wwidth + " SCROLLING=no FRAMEBORDER=0 MARGINWIDTH=2 MARGINHEIGHT=2 onfocus='window.focus(); return iframeFocus()'>\");");
out.println("document.write(\"</iframe>\");");
This RichMediaServlet returns HTML into the <iframe>. When RichMediaServlet is called, a parameter 'bannerCode' is passed; RichMediaServlet then fetches the banner from the database and displays it in the <iframe>.
Now the problem comes when we run the HTML file containing the script tag mentioned above and refresh the page; ideally it should pick ads randomly and pass them on to RichMediaServlet.
I also tried debugging both servlets. I called GetAdServlet from the JavaScript mentioned above and put debugging info in both servlets. For every refresh on the HTML side we get a different random bannerCode in GetAdServlet, but when RichMediaServlet prints the bannerCode received in the query string, it shows an older value that was displayed some time back, and it keeps doing this for quite a long time, making it look like a caching problem around RichMediaServlet.
Instead, when we put the same HTML <script> code into another servlet's doGet, everything seems to work fine.
I have also used the following code to prevent caching in both servlets:
long currentTime = System.currentTimeMillis();
response.setHeader("Cache-Control", "no-cache, must-revalidate");
response.setHeader("Pragma", "no-cache");
response.setDateHeader("Last-modified", currentTime);
response.setHeader("Expires", "Sat, 6 May 1995 12:00:00 GMT");
and the following in the iframe's head tag, before the iframe tag, in GetAdServlet:
out.println("document.write('<head>');");
out.println("document.write('<meta http-equiv=\"Cache-Control\" content=\"no-cache,must-revalidate\">');");
out.println("document.write('<meta http-equiv=\"Pragma\" content=\"no-cache\">');");
out.println("document.write('<meta http-equiv=\"Last-modified\" content=\""+ currentTime + "\">');");
out.println("document.write('<meta http-equiv=\"expires\" content=\"Sat, 6 May 1995 12:00:00 GMT\">');");
out.println("document.write('</head>');");
I request you all geeks to try and help me to your best; the project is at its final stages and in high urgency now.
I think the caching is happening in the browser, with the iframe.
You should try passing a random param to the servlet in the iframe URL, something like:
var a = Math.random() * 10000000; //for example
out.println("document.write(\"<iframe src='" + iframeURL +"&rand="+a+"' height=" + hheight +" width="+ wwidth + " SCROLLING=no FRAMEBORDER=0 MARGINWIDTH=2 MARGINHEIGHT=2 onfocus='window.focus(); return iframeFocus()'>\");");
out.println("document.write(\"</iframe>\");");
...
It should force the browser to ask for the servlet again.
hope this helps...