Coherence 3.6.0 transactional cache and POF - NULL values

Hi,
We are trying to use the new transactional scheme introduced in 3.6.0 and are encountering abnormal behaviour. The code executes without any exceptions or warnings, but in the cache we find the key associated with a NULL value.
To try to identify the problem, we defined two services (see cache-config below):
- one transactional cache
- one distributed cache
If we insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the Coherence console). But if we insert custom classes using POF, the key is inserted with a NULL value.
In the same cluster we defined a distributed cache that uses the same POF classes/configuration. There, a call to put succeeds in every scenario (both key and value are visible using the Coherence console).
<?xml version="1.0"?>
<!DOCTYPE cache-config SYSTEM "cache-config.dtd">
<cache-config>
     <caching-scheme-mapping>
          <cache-mapping>
               <cache-name>cnt.*</cache-name>
               <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
          </cache-mapping>
          <cache-mapping>
               <cache-name>stt.*</cache-name>
               <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
          </cache-mapping>
     </caching-scheme-mapping>
     <caching-schemes>
          <transactional-scheme>
               <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
               <service-name>storage.transactionalcache.cnt</service-name>
               <thread-count>10</thread-count>
               <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    <init-params>
                         <init-param>
                              <param-type>String</param-type>
                              <param-value>cnt-pof-config.xml</param-value>
                         </init-param>
                    </init-params>
               </serializer>
               <backing-map-scheme>
                    <local-scheme>
                         <high-units>250M</high-units>
                         <unit-calculator>binary</unit-calculator>
                    </local-scheme>
               </backing-map-scheme>
               <autostart>true</autostart>
          </transactional-scheme>
          <distributed-scheme>
               <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
               <service-name>storage.distributedcache.stt</service-name>
               <serializer>
                    <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                    <init-params>
                         <init-param>
                              <param-type>String</param-type>
                              <param-value>cnt-pof-config.xml</param-value>
                         </init-param>
                    </init-params>
               </serializer>
               <backing-map-scheme>
                    <local-scheme>
                         <high-units>250M</high-units>
                         <unit-calculator>binary</unit-calculator>
                    </local-scheme>
               </backing-map-scheme>
               <autostart>true</autostart>
          </distributed-scheme>
     </caching-schemes>
</cache-config>
Failing code (uses the 3.6.0 transaction API; Connection, DefaultConnectionFactory and OptimisticNamedCache come from the com.tangosol.coherence.transaction package):
     public static void main(String[] args) {
          Connection con = new DefaultConnectionFactory()
                    .createConnection("storage.transactionalcache.cnt");
          con.setAutoCommit(false);
          try {
               OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.insert(tID, tC);
               con.commit();
          } catch (Exception e) {
               e.printStackTrace();
               con.rollback();
          } finally {
               con.close();
          }
     }
Code that succeeds (but without the transaction API):
     public static void main(String[] args) {
          try {
               NamedCache cache = CacheFactory.getCache("stt.t1");
               CId tID = new CId();
               tID.setId(11111L);
               C tC = new C();
               tC.setVal(new BigDecimal("100.1"));
               cache.put(tID, tC);
          } catch (Exception e) {
               e.printStackTrace();
          } finally {
               // (finally body not shown in the original post)
          }
     }
And here is what the Coherence console lists when we use the transactional API:
Map (cnt.t1): list
CId {
id = 11111
} = null
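For reference, the POF classes involved are plain PortableObjects along these lines (a sketch only; our real CId and C are equivalent and are registered in cnt-pof-config.xml):
     import com.tangosol.io.pof.PofReader;
     import com.tangosol.io.pof.PofWriter;
     import com.tangosol.io.pof.PortableObject;
     import java.io.IOException;
     import java.math.BigDecimal;

     // sketch of the value class; CId is analogous, with a long id field
     public class C implements PortableObject {
          private BigDecimal val;

          public C() {} // POF requires a public no-arg constructor

          public BigDecimal getVal() { return val; }
          public void setVal(BigDecimal val) { this.val = val; }

          public void readExternal(PofReader in) throws IOException {
               val = in.readBigDecimal(0);
          }

          public void writeExternal(PofWriter out) throws IOException {
               out.writeBigDecimal(0, val);
          }
     }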
Any suggestion, please?

Cristian,
After looking at your configuration I noticed that it is incorrect: for a transactional scheme you cannot specify a backing-map-scheme.
Your config contained:
<backing-map-scheme>
<local-scheme>
<high-units>250M</high-units>
<unit-calculator>binary</unit-calculator>
</local-scheme>
</backing-map-scheme>
To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
<transactional-scheme>
    <scheme-name>small-high-units</scheme-name>
    <service-name>TestTxnService</service-name>
    <autostart>true</autostart>
    <high-units>1M</high-units>
</transactional-scheme>
http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
The reason you cannot specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
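Applied to your configuration, the transactional scheme would become something like this (a sketch keeping your serializer and sizing; adjust as needed):
<transactional-scheme>
     <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
     <service-name>storage.transactionalcache.cnt</service-name>
     <thread-count>10</thread-count>
     <serializer>
          <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
          <init-params>
               <init-param>
                    <param-type>String</param-type>
                    <param-value>cnt-pof-config.xml</param-value>
               </init-param>
          </init-params>
     </serializer>
     <high-units>250M</high-units>
     <autostart>true</autostart>
</transactional-scheme>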
I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
Can you please apply the above changes to your configuration and let us know your results.
Thanks,
John

Similar Messages

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
     For example, I write to two transaction maps, each one created from a separate cache. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

    > The transaction support in Coherence is for Local
     > Transactions. Basically, what this means is that the
     > first phase of the commit ("prepare") acquires locks
     > and ensures that there are no conflicts. The second
     > phase ("commit") does nothing but push data out to
     > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
     This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
     For the write-through case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
     But once we have multiple CacheStore modules, then once one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
     If I use write-behind CacheStore modules, can I roll back entirely and avoid partial commits, since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different than local transactions... Is my understanding correct?
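    For reference, a minimal sketch of the multi-map local-transaction pattern discussed above, against the 3.x TransactionMap API (cache names and values are made up; this is the cache-only path, with no CacheStore):
         import com.tangosol.net.CacheFactory;
         import com.tangosol.util.TransactionMap;
         import java.util.Arrays;
         import java.util.Collection;

         public class TwoMapTxSketch {
              public static void main(String[] args) {
                   TransactionMap txA = CacheFactory.getLocalTransaction(CacheFactory.getCache("tx-a"));
                   TransactionMap txB = CacheFactory.getLocalTransaction(CacheFactory.getCache("tx-b"));
                   for (TransactionMap tx : new TransactionMap[] { txA, txB }) {
                        tx.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                        tx.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                        tx.begin();
                   }
                   txA.put("k1", "v1");
                   txB.put("k2", "v2");
                   // prepare + commit across both maps; on failure, roll both back
                   Collection txs = Arrays.asList(new TransactionMap[] { txA, txB });
                   if (!CacheFactory.commitTransactionCollection(txs, 1)) {
                        CacheFactory.rollbackTransactionCollection(txs);
                   }
              }
         }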

  • Order Of Null and Not Null Values while table creation

    We have to create a table with 10 fields; 5 of them are NULL and 5 are NOT NULL. Now my question: what should be the order of the fields? Should the NULL fields come first, or the NOT NULL ones?

    The only reason I can think of for having the NULL columns at the end is storage space. To conserve space, a null in a column only stores the column length (zero); Oracle does not store data for the null column. Also, for trailing null columns, Oracle does not even store the column length.

  • JDO transaction, caching and stored procedures

    I currently retrieve an existing object, update the object, and then call
    a stored procedure - all in one transaction. The issue I am running into
    is that the stored proc must have the modified data available when it
    selects the row (the modified object) from the table. This is currently not
    happening. For example, if the initial field was null, then it is updated
    and the proc called, the proc only sees null. How do I specify that the
    changes should be processed within the transaction (not committed) and
    available for other processes reading the data in the same tx?
    Note: everything works perfectly if I manually execute the update statement
    through a Statement on the connection for the stored proc. I have tried
    flush().
    Any ideas would be greatly appreciated!

    Here is a dumbed down version... I also tried doing a Query with JDO and
    that returns the modified value. Basically, the code below finds the
    existing object, modifies a field, and then calls a stored proc to do some
    processing. Unfortunately, the stored proc needs to be able to "see" the
    modified data in the transaction.
    public void modifyFoo(UserToken userToken, Foo newFoo, Integer applicationId) {
        PersistenceManager pm = null;
        Transaction tx = null;
        try {
            pm = JDOFactory.getPersistenceManager(userToken);
            tx = pm.currentTransaction();
            tx.begin();
            FooBoId appId = new FooBoId();
            appId.fooBolId = applicationId;
            FooBo appDbVo = (FooBo) pm.getObjectById(appId, true);
            appDbVo.setEffectiveDate(newFoo.getEffectiveDate());
            // the stored proc must see this change within the same tx
            String fooNumber = executeFooBar(userToken, applicationId);
            appDbVo.setFooNumber(fooNumber);
            tx.commit();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private String executeFooBar(UserToken userToken, Integer applicationId)
            throws PulsarSystemException, ValidationException {
        String licenseNumber = null;
        try {
            // Get a connection from the persistence manager
            Connection connection = JDOFactory.getConnection(userToken);
            connection.setAutoCommit(false);
            CallableStatement cs = connection.prepareCall(
                    "{call sys_appl_pkg.fooBar(?,?,?,?,?,?)}");
            // Register the types of the return values
            cs.registerOutParameter(4, Types.NUMERIC);
            cs.registerOutParameter(5, Types.NUMERIC);
            cs.registerOutParameter(6, Types.VARCHAR);
            // Set parameters:
            cs.setString(1, "Test");
            cs.setInt(2, applicationId.intValue());
            cs.setInt(3, 0);
            // execute the call and read the out parameter (elided in the original post)
            cs.execute();
            licenseNumber = cs.getString(6);
        } catch (SQLException ex) {
            ex.printStackTrace();
        }
        return licenseNumber;
    }
    Patrick Linskey wrote:
    Can you post the code that you're executing?
    -Patrick
    Patrick Linskey
    SolarMetric Inc.

  • Checking null and not null values

    Hi!
    We have a job schedule table that has a column for each day.
    JOB_ID  TIME_ID  MO  TU  WE  TH  FR  SA  SU
    1       1                X
    1       2                            X
    Since the same job can be on different days and times, we need to check all the instances of that job.
    Sorry for the formatting - the X is an indicator; i.e. Wednesday and Saturday.
    How can we solve this?
    Thanks!

    Your requirements are not really clear.
    Perhaps this query could help:
    -- Your Data:
    with yourtable as (
      select 1 job_id, 1 time_id, null mo, null tu, 'X' we, null th, null fr, null sa, null su from dual union all
      select 1 job_id, 2 time_id, null mo, null tu, null we, null th, null fr, null sa, 'X' su from dual
    )
    -- Query:
    select job_id, time_id, 'mo' day from yourtable where to_char(mo)='X' union all
    select job_id, time_id, 'tu' day from yourtable where to_char(tu)='X' union all
    select job_id, time_id, 'we' day from yourtable where to_char(we)='X' union all
    select job_id, time_id, 'th' day from yourtable where to_char(th)='X' union all
    select job_id, time_id, 'fr' day from yourtable where to_char(fr)='X' union all
    select job_id, time_id, 'sa' day from yourtable where to_char(sa)='X' union all
    select job_id, time_id, 'su' day from yourtable where to_char(su)='X'

  • Creating table with null and not null values.

    I have to create a table with 5 NULL and 5 NOT NULL fields. My question is: which fields are placed first, NULL or NOT NULL, and why?

    What you mean is: the person who asked you the question thought the order of columns was important, and wanted to see if you agreed. Or maybe they were just asking the question to see whether you thought it was important (as a test of your understanding of the database).
    When I started out in Oracle I was told that it was good practice to have the mandatory columns first and then the optional columns. But if you do some research you find that the impact of column ordering on space management is this: empty nullable columns at the end of the table take up zero bytes, whereas empty nullable columns in the middle of the table take up one byte each. I think if that saving really is important to you, you need to spring for an additional hard drive.
    Besides, even if you do organise your columns as you suggest, what happens when you add an eleventh NOT NULL column to your table? It gets tacked on to the end of your table. Your whole neat organisation is blown.
    What does still matter is the positioning of large VARCHAR2 columns: they should be the last columns on your table in case they create chaining. Then any query that doesn't need the large columns can be satisfied without reading the chained block, something that can't be guaranteed if you've got columns after a VARCHAR2(4000) column. This doesn't apply to CLOBs, etc. which are stored separately anyway.
    Cheers, APC

  • How to query Database with Parameters and configure null value response?

    Hi,
    1. When capturing attributes from forms and, after applying several pieces of logic, passing them to a DB table using an API, how do we get the relevant values for a given parameter from another DB table?
    2. When a DB table is queried and the value does not exist, how do we configure the response message?
    Thanks.

    Okay, you've provided exactly what John S. asked for - and no more. This is helpful, but not enough. I think we're going to need a use case to understand exactly what you are asking. What should the user see? What does the user do next? What should happen in the database and in the application when the user does this?
    However, I'll try to read between the lines a bit, and get you part of the way there. To query the database with parameters in ADF BC, you need a View Object (VO). The simplest thing to do is create the SELECT command behind the VO with some bind variables and add the bind variables to your VO. At that point, you will get an ExecuteWithParameters operation in the Data Control. You can drag that operation onto a JSF page and it will give you an option to create a parameter form to let the user fill in the parameters to set the bind variables, and a button to execute the query with these values. Any table or form based on that same VO will show the selected data.
    A Trinidad or ADF Rich Faces table has an attribute that lets you define some text to show the user if no data was retrieved by the query. There are also other ways to determine whether data was retrieved, which you can use to control how this information is displayed. For instance, I have a page with an outputText component whose "rendered" attribute shows the text only when there was no data retrieved by a query.

  • Need help: take out the null values from the ResultSet and create an XML file

    hi,
    I wrote something which connects to a database and gets a ResultSet. From that ResultSet I am creating
    an XML file. In my program the two main classes are Frame1 and ResultSetToXML. ResultSetToXML
    takes a ResultSet and a boolean value in its constructor. I pass the ResultSet and boolean value
    from the Frame1 class. The boolean value controls whether the null values from the ResultSet are added
    to the XML file. When I run the program with nulls included, it works alright and adds both the null and
    non-null values to the file. But when I pass the boolean value to take out the null values, it does not
    take them out and still adds both the null and non-null values.
    Please look at the code I am posting. I show step by step where it is not excluding the null values.
    Any help is always appreciated.
    Thanks in advance.
    ============================================================================
    Frame1 Class
    ============
    public class Frame1 extends JFrame {
        private JPanel contentPane;
        private XQuery xQuery1 = new XQuery();
        private XYLayout xYLayout1 = new XYLayout();
        public Document doc;
        private JButton jButton2 = new JButton();
        private Connection con;
        private Statement stmt;
        private ResultSetToXML rstx;

        // Construct the frame
        public Frame1() {
            enableEvents(AWTEvent.WINDOW_EVENT_MASK);
            try {
                jbInit();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        // Component initialization
        private void jbInit() throws Exception {
            contentPane = (JPanel) this.getContentPane();
            xQuery1.setSql("");
            xQuery1.setUrl("jdbc:odbc:SCANODBC");
            xQuery1.setUserName("SYSDBA");
            xQuery1.setPassword("masterkey");
            xQuery1.setDriver("sun.jdbc.odbc.JdbcOdbcDriver");
            contentPane.setLayout(xYLayout1);
            this.setSize(new Dimension(400, 300));
            this.setTitle("Frame Title");
            xQuery1.setSql("Select * from Pinfo where pid=2 or pid=4");
            jButton2.setText("Get XML from DB");
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
            } catch (java.lang.ClassNotFoundException ex) {
                System.err.print("ClassNotFoundException: ");
                System.err.println(ex.getMessage());
            }
            try {
                con = DriverManager.getConnection("jdbc:odbc:SCANODBC", "SYSDBA", "masterkey");
                stmt = con.createStatement();
            } catch (SQLException ex) {
                System.err.println("SQLException: " + ex.getMessage());
            }
            jButton2.addActionListener(new java.awt.event.ActionListener() {
                public void actionPerformed(ActionEvent e) {
                    jButton2_actionPerformed(e);
                }
            });
            contentPane.add(jButton2, new XYConstraints(126, 113, -1, -1));
        }

        // Overridden so we can exit when window is closed
        protected void processWindowEvent(WindowEvent e) {
            super.processWindowEvent(e);
            if (e.getID() == WindowEvent.WINDOW_CLOSING) {
                System.exit(0);
            }
        }

        void jButton2_actionPerformed(ActionEvent e) {
            try {
                Element root = new Element("PINFO");
                String query = "SELECT * FROM PINFO WHERE PID=2 OR PID=4";
                ResultSet rs = stmt.executeQuery(query);
                /* ===== This is where i am passing the ResultSet and boolean =====
                   ===== value to either add the null or not null values in the file ===== */
                rstx = new ResultSetToXML(rs, true);
            } catch (SQLException ex) {
                System.err.println("SQLException: " + ex.getMessage());
            }
        }
    }
    ======================================================================================
    ResultSetToXML class
    ====================
    public class ResultSetToXML {
        private OutputStream out;
        private Element root;
        private XMLOutputter outputter;
        private Document doc;

        // Constructor
        public ResultSetToXML(ResultSet rs, boolean checkifnull) {
            try {
                String tagname = "";
                String tagvalue = "";
                root = new Element("pinfo");
                while (rs.next()) {
                    Element users = new Element("Record");
                    for (int i = 1; i <= rs.getMetaData().getColumnCount(); ++i) {
                        tagname = rs.getMetaData().getColumnName(i);
                        tagvalue = rs.getString(i);
                        System.out.println(tagname);
                        System.out.println(tagvalue);
                        /* if checkifnull is true, this is meant to skip null values;
                           otherwise both null and non-null values are added */
                        if (checkifnull) {
                            if ((tagvalue == null) || tagvalue.length() < 0) {
                                // NOTE: this branch also adds the element, so null
                                // values are never taken out (and length() < 0 can
                                // never be true) - this is the reported problem
                                users.addContent(new Element(tagname).setText(tagvalue));
                            } else {
                                users.addContent(new Element(tagname).setText(tagvalue));
                            }
                        } else {
                            users.addContent(new Element(tagname).setText(tagvalue));
                        }
                    }
                    root.addContent(users);
                }
                out = new FileOutputStream("c:/XMLFile.xml");
                doc = new Document(root);
                outputter = new XMLOutputter();
                outputter.output(doc, out);
            } catch (IOException ioe) {
                System.out.println(ioe);
            } catch (SQLException sqle) {
                System.out.println(sqle);
            }
        }
    }

    Can someone please help me with this problem
    Thanks.

  • NULL values and Data Control based on Web Service

    One of my ADF controls is based on a Data Control created from a web service. Everything is working fine except the way the ADF control handles the null values returned by the web service based data control.
    For example for null columns the web service is sending the following:
    <ns0:beginDate xsi:nil="1"/>
    or
    <ns0:sourceCode xsi:nil="1"/>
    But the corresponding column in my ADF data control is trying to initialize itself using the value ‘[{nil=1}]’. It fails with the following error.
    2006-04-20 13:31:37.510 WARNING JBO-25009: Cannot create an object of type:java.util.Date with value:[{nil=1}]
    I would appreciate it if someone could help me resolve this issue.
    Thanks,

    I tried again, but it seems that I'm unable to reproduce this in a test project which uses other data controls but is still as similar as possible to the original project in which I've encountered this behaviour.
    However, using a data control based on the same web service as in the original project, [{nil=1}] appears again instead of empty values.
    Is it possible that there is a significant difference in the generated wrapper classes? The underlying PL/SQL is the same (in structure) and the corresponding elements in the response XML of the web service are the same in the two cases, always like <ns0:someResponeValue xsi:nil="1"/>, so I don't know how it is possible that I can't reproduce this behaviour.
    A short description of the projects:
    Services/Model:
    I created a PL/SQL function in a package that returns a user type object. This return parameter consists of a non-empty string and a null value string. On top of this I created a PL/SQL web service and a data control for that.
    View/Controller:
    A JSF JSP page which has a read-only form showing the return values of the web service.
    Regards,
    Patrik

  • Transactions involving multiple caches and a database

    Hi all,
    I'm curious if the following is possible with the transaction support in Coherence.
    I have the need to write data to two caches and a database from within my Tomcat container, and the whole operation must be atomic.
    Example:
    Write to database (this is NOT via a CacheStore)
    Write to cache 1 (uses write-through to database)
    Write to cache 2 (uses write-through to database)
    If any operation fails, the whole transaction needs to rollback. This feels like an XA transaction, but it looks like it won't work like I expect because the cache must be the last resource, but I have two caches. The ordering of operations is not important.
    Thanks,
    Rob

    MagnusE wrote:
    When using write-through there is (as far as I know) no way to get a fully transactional behaviour (assuming you have more than one cache node) since each node is responsible for persisting its own data items (they will each use a separate connection to the database).
    If you on the other hand use a "cache beside" pattern, this can be made transactional using XA. As long as both caches belong to the same cache service they count as a single "last resource"...
    /Magnus
    You also need to use the same transaction isolation and concurrency settings for it to count as the same last resource. Practically, you should only have a single CacheAdapter instance enrolled in the same transaction.
    Best regards,
    Robert

  • Can I enable POF serialization for one cache and Java serialization for another?

    I have a Coherence cluster with a few caches. Is there any way I can enable POF serialization for one cache and have the others use normal Java serialization?

    839051 wrote:
    I have a Coherence cluster with a few caches. Is there any way I can enable POF serialization for one cache and have the others use normal Java serialization?
    Hi,
    you can control serialization on a service-by-service basis. You can specify which serializer to use for a service with the <serializer> element in the service-scheme element corresponding to that service in the cache configuration file.
    Be aware, though, that if you use Coherence*Extend and the serializer configuration of the proxy service does not match the serializer configuration of the service it proxies for the extend client, then the proxy node has to deserialize and reserialize the data it moves between the service and the client.
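    For example, something along these lines in the caching-schemes section (scheme and service names are made up; a scheme with no <serializer> element simply defaults to standard Java serialization):
    <distributed-scheme>
         <scheme-name>pof-scheme</scheme-name>
         <service-name>PofDistributedService</service-name>
         <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                   <init-param>
                        <param-type>String</param-type>
                        <param-value>my-pof-config.xml</param-value>
                   </init-param>
              </init-params>
         </serializer>
         <autostart>true</autostart>
    </distributed-scheme>
    <distributed-scheme>
         <scheme-name>java-scheme</scheme-name>
         <service-name>JavaDistributedService</service-name>
         <!-- no serializer element: uses default Java serialization -->
         <autostart>true</autostart>
    </distributed-scheme>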
    Best regards,
    Robert

  • How to put data into a cache and distribute it to nodes using Oracle Coherence

    Hi Friends,
    I have some random-number data written to a file. I am reading the data from that file and want to put it into a cache. How can I put the data into the cache and partition it across different nodes (machines) to calculate things like standard deviation, variance, etc.? (Or: how can I implement Monte Carlo simulation using Oracle Coherence?) If anyone knows, please suggest a flow.
    Thank you.
    regards
    chandra

    Hi Robert,
    I have some bulk data in an ArrayList or object format that I want to put into the cache, but I am not able to. I am using the put method, cache.put(Object key, Object value), but it is not putting the data into the cache. Can you please help me? I am sending my code; please go through it and tell me where I made a mistake.
    package lab3;

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.cache.NearCache;
    import java.io.File;
    import java.io.FileNotFoundException;
    import java.io.PrintWriter;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Random;
    import java.util.Scanner;

    public class BlockScoleData {
        /*
         * s = the spot market price
         * x = the exercise price of the option
         * v = instantaneous standard deviation of s
         * r = risk-free instantaneous rate of interest
         * t = time to expiration of the option
         * n = number of MC simulations
         */
        private static String outputFile = "D:/cache1/sampledata2.txt";
        private static String inputFile = "D:/cache1/sampledata2.txt";
        NearCache cache;
        List<Credit> creditList = new ArrayList<Credit>();

        public void writeToFile(int noofsamples) {
            Random rnd = new Random();
            PrintWriter writer = null;
            try {
                writer = new PrintWriter(outputFile);
                for (int i = 1; i <= noofsamples; i++) {
                    double s = rnd.nextInt(200) * rnd.nextDouble();
                    int t = rnd.nextInt(5);
                    double v = rnd.nextDouble();
                    double r = rnd.nextDouble() / 10;
                    writer.println(s + " " + t + " " + v + " " + r);
                }
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                writer.close();
                writer = null;
            }
        }

        public List<Credit> readFromFile() {
            Scanner scanner = null;
            Credit credit = null;
            try {
                scanner = new Scanner(new File(inputFile));
                while (scanner.hasNext()) {
                    credit = new Credit(scanner.nextDouble(), scanner.nextInt(),
                            scanner.nextDouble(), scanner.nextDouble());
                    creditList.add(credit);
                }
                System.out.println("read the list from file:" + creditList);
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } finally {
                scanner.close();
                credit = null;
                scanner = null;
            }
            return creditList;
        }

        public Object put(Object key, Object value) {
            cache = (NearCache) CacheFactory.getCache("mycache");
            List<Credit> cachelist = new ArrayList<Credit>();
            // note: the whole list is used as both key and value here
            cache.put(creditList, creditList);
            System.out.println("read to the cache list from file:" + cache.get(creditList));
            return cachelist;
        }

        public static void main(String[] args) throws Exception {
            NearCache cache = (NearCache) CacheFactory.getCache("mycache");
            new BlockScoleData().writeToFile(20);
            System.out.println("New file \"myfile.csv\" has been created in the current directory");
            CacheFactory.ensureCluster();
            new BlockScoleData().readFromFile();
            System.out.println("data read from file successfully");
            List<Credit> creditList = new ArrayList<Credit>();
            new BlockScoleData().put(creditList, creditList);
            System.out.println("read to the cache list from file:" + cache.get(creditList));
        }
    }
    regards
    chandra
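    A minimal sketch of the more usual pattern for the goal above (the method name and the use of the list index as key are my own choices; Credit must also be POF- or Java-serializable):
         import com.tangosol.net.CacheFactory;
         import com.tangosol.net.NamedCache;
         import java.util.HashMap;
         import java.util.List;
         import java.util.Map;

         public class CachePutSketch {
              // put each Credit under its own key instead of using the whole
              // ArrayList as both key and value; per-key entries are what lets
              // Coherence partition the data across storage nodes
              public static void putAll(List<Credit> credits) {
                   NamedCache cache = CacheFactory.getCache("mycache");
                   Map<Integer, Credit> batch = new HashMap<Integer, Credit>();
                   for (int i = 0; i < credits.size(); i++) {
                        batch.put(Integer.valueOf(i), credits.get(i));
                   }
                   cache.putAll(batch); // entries are distributed by key
              }
         }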

  • IMDB Cache and transaction logs

    Hi,
    We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx 900 million rows) into a read-only local cache group, and are finding that the amount of space taken by transaction logs during the initial cache load operation exceeds the amount of disk space available. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us, as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory; however, the filesystem available is 273GB, less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
    The datastore attributes we are using are:
    [EntResPP]
    Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
    DataStore=/prod100/oradata/EntResPP
    LogPurge=1
    PermSize=60000
    TempSize=2000
    PLSQL=1
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=TRAQPP.world
    The command we use to load the cache is:
    load cache group ro commit every 256 rows parallel 4
    Thanks
    Mark

    The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a readonly cache group then it is not necessary (or possible) to run the replication agent.
    The error message you mentioned is nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle, it will require between 20 and 50 GB in TimesTen.
    It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows:
    1. Stop all application connections etc. to the datastore, stop the cache and replication agents, and make sure that the datastore is unloaded from memory.
    2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
    3. Start the cache agent. From the ttIsql session, issue the command:
    load cache group ro commit every 0 rows;
    You have to use 0 (load the entire group as a single 'transaction') and you cannot use the 'parallel' clause.
    If this fails you may have to manually delete any rows that were loaded, since TT cannot roll back.
    4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
    5. Change Logging back to 1 and reconnect as instance administrator from ttIsql. Restart the cache agent.
    6. Start applications etc. as required.
    Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load using logging. Of course, as I mentioned, the error you are getting right now is nothing to do with log disk space...
    Chris

  • Multiple caches and transactions...

    As I read the documentation it is possible for several caches (belonging to the same service) to participate in one XA transaction - is this correct or are there any problems / limitations one needs to be aware of?
    Note : I am here talking about partitioned caches with no cache loader (cache beside pattern).
    Best Regards
    Magnus

    Hi Magnus,
    Yes, it is possible - please take a look at this example: Q: How do I test the Transactional Cache?.
    Regards,
    Dimitri

  • External transaction control and ability to find new registered objects

    Hello, We are using TopLink with external transaction control and have a process inserting a complex hierarchy of objects. During the process we do either a registerObject or a deepMergeClone depending on whether the instance is already in the db. With external transaction control, registerObject does not actually commit to the db until the global transaction (container) issues the commit. Unfortunately we end up creating multiple instances of the same objects (because of the assumption that registerObject would have written the row to the db) with the same keys, and when the container issues the commit we end up with a duplicate key violation. Is there a way to find out if an object with a particular key is already registered?

    This sounds like the kind of question that can only be answered with a whiteboard and a good review of your architecture.
    In general, there should be no problem registering objects multiple times. I.e.,
    x = some object
    x1 = uow.registerObject(x);
    Then x1==uow.registerObject(x), and x1==uow.registerObject(x1), etc. When you register an object with the UOW, based on PK it'll always return that same one.
    Do you have multiple units of work on the go? (that may explain this behavior).
    In any case, I think the real problem here is that you're somehow registering objects that are no longer cached. I.e., some object is serialized or rebuilt and then registered after it's gone from the cache. By default, TopLink determines if an object is new or existing (to determine INSERT vs UPDATE) by hitting the cache. You can change this default behavior in the Mapping Workbench: open the advanced property for "Identity" and change existence checking to "check database". Although, this can be a slow and tedious process, since it has to keep hitting the DB.
    A little trick I use sometimes is to take advantage of the "readObject" API that will read the object from the database if it's not already in cache, and just return it from cache if it is in cache. Check out the UOW primer at http://otn.oracle.com/products/ias/toplink/index.html for more info, but the gist is that I would do this if I were you:
    x = some object that you're not sure is cached and you want to register in UOW;
    x' = uow.readObject(x);
    IF the object was in cache, you'd get back a working copy, nice and fast. IF it's not in cache, you hit the database, it goes in cache, and you get your working copy. Now you don't have to change the existence checking option which could slow everything down.
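    Spelled out a little (a sketch; the session variable, Employee class and accessors are made up):
         // acquire a unit of work from an active TopLink session
         UnitOfWork uow = session.acquireUnitOfWork();
         Employee detached = getPossiblyStaleObject(); // may no longer be in the cache
         // readObject returns the cached working copy, or reads from the DB if absent
         Employee workingCopy = (Employee) uow.readObject(detached);
         workingCopy.setSalary(workingCopy.getSalary() + 100);
         uow.commit();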
    - Don
