Coherence and database backend updates

Hi,
I am new to Coherence, and I like its features such as the replicated cache, read-through/write-through caching, etc.
My question: if I am using Coherence with read-through caching and partitioned caching, and the data is updated on the back end by an Oracle database stored procedure, how does the Coherence cache get the latest data changed by the stored procedure? Is there an event-driven mechanism to invalidate the cache so that it reloads the data, or is that not good practice in this scenario?
Rgds,
Anil

Hi Anil,
it really depends on what you need to achieve.
There is a very good wiki which describes most of the things you can do with Coherence: http://wiki.tangosol.com/display/COH33UG/Coherence+3.3+Home
However, since you have an existing database model which you want to retain (you want the data to still reside in the database), depending on the consistency requirements you might not be totally free in how you represent the data in Coherence.
The best feature of Coherence to significantly reduce the load on the database is the write-behind cache.
Write-behind allows you to coalesce multiple updates to the same DB row into a single update: data is written out only after a configurable delay, so the changes from several cache updates are combined into one database write.
It also allows ripe updates (those whose write-behind delay has expired) to multiple cached entries whose primary copies reside on the same cache node to be written out in the same database operation, preferably in batch mode.
Due to these behaviors write-behind has a profound effect on write-heavy applications.
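As an illustration, here is a minimal CacheStore sketch (the class, cache, and table names are hypothetical, and the JDBC calls are elided); with write-behind enabled, Coherence invokes storeAll() with the batch of ripe entries instead of calling store() once per update:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.net.cache.CacheStore;

    // Hypothetical store for an "orders" cache backed by an ORDERS table.
    public class OrderCacheStore implements CacheStore {
        public Object load(Object key) {
            // Read-through path: SELECT ... FROM ORDERS WHERE ID = ?
            return null; // placeholder for the loaded value
        }
        public Map loadAll(Collection keys) {
            Map results = new HashMap();
            for (Object key : keys) {
                Object value = load(key);
                if (value != null) {
                    results.put(key, value);
                }
            }
            return results;
        }
        public void store(Object key, Object value) {
            // Single INSERT/UPDATE; with write-behind, several cache updates
            // to the same key within the delay window collapse into one call.
        }
        public void storeAll(Map entries) {
            // All ripe entries owned by this node arrive here together, so
            // they can be written in one batched JDBC statement if desired.
            for (Object key : entries.keySet()) {
                store(key, entries.get(key));
            }
        }
        public void erase(Object key) {
            // DELETE FROM ORDERS WHERE ID = ?
        }
        public void eraseAll(Collection keys) {
            for (Object key : keys) {
                erase(key);
            }
        }
    }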
However, this mode of operation requires that any logic which needs to query the data-set consistently, and all operations changing the data-set, go through the cache, because the database itself is no longer guaranteed to be consistent. Therefore it might not be a good fit for you.
Another approach, if you want to make your DB changes directly in the DB, is to simply cache data in whatever structures suit your access patterns in a read-through cache, and invalidate whichever entries become stale when the database changes.
The cache structures can be whatever you deem appropriate to your logic: you can cache single entries, entire top-down object hierarchies, or query results keyed by the query parameters.
The point is that you are free to choose the most appropriate structure for what you cache, as opposed to the caching features of other frameworks, which align the cached structures to their classes and not to your needs.
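For example, a minimal invalidation sketch for Anil's scenario (the cache name is hypothetical; the mechanism that detects the back-end change, such as a trigger-fed queue, a polling job, or JDBC database change notification, is whatever fits your environment):

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    // Hypothetical invalidator, called when a back-end change is detected.
    public class CacheInvalidator {
        public void invalidate(Object key) {
            NamedCache cache = CacheFactory.getCache("orders"); // hypothetical name
            // Removing the stale entry forces the next get() to read through
            // to the database and load the value the stored procedure wrote.
            cache.remove(key);
        }
    }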
Just keep in mind that without serious locking (which adversely affects both read and write performance), a change might occur to one or more entries between your reading any two or more entries from the cache. This means that when using multiple entries from the cache together, there might not be any transaction-set in the database which contains all of those entries in the state in which you read them.
So if you need any such guarantees, the data you need them on must reside in a single cache entry, and that cache entry must have been retrieved from the database with a transaction which provides those guarantees. (If you read data from the database with READ_COMMITTED isolation and with multiple queries, you don't get that consistency even from the database, as some of the entries read by earlier operations in the transaction might have been overwritten by another transaction committing before the later reads in your transaction.)
There can be other approaches as well.
It really all depends on your access patterns and without knowing more about that it is hard to suggest the correct solution.
Best regards,
Robert

Similar Messages

  • Using Coherence and Oracle Database as the CacheStore

    We are working on implementing a solution using Coherence and an Oracle database as the CacheStore. We initially implemented the cache as a distributed-scheme, which in turn uses the backing-map-scheme. We are trying to introduce transaction management, and I used a scheme-ref in a transactional-scheme to point to an already existing distributed-scheme. However, when I bring up the server, my custom coherence-cache-config.xml file is not recognized and Coherence comes up with the default settings. Given below is a snippet of my configuration file.
    1) I would like to understand why the configuration below doesn't work. Am I doing it the right way? If not, what is the correct way of doing it?
    2) There are multiple transaction management options given in the documentation. Which ones will work with a distributed-scheme and read-write-backing-map-scheme?
    3) If transactional-schemes cannot work with a distributed-scheme, what is the best way to have a distributed cache with an Oracle database as a cache store?
    <caching-scheme-mapping>
      <cache-mapping>
        <cache-name>id</cache-name>
        <scheme-name>example-transactional</scheme-name>
      </cache-mapping>
    </caching-scheme-mapping>
    <caching-schemes>
      <transactional-scheme>
        <scheme-name>example-transactional</scheme-name>
        <scheme-ref>distributedcustomcache</scheme-ref>
        <thread-count>10</thread-count>
      </transactional-scheme>
      <distributed-scheme>
        <scheme-name>distributedcustomcache</scheme-name>
        <service-name>DistributedCache</service-name>
        <backing-map-scheme>
          <read-write-backing-map-scheme>
            <internal-cache-scheme>
              <local-scheme>
                <!--scheme-ref>categories-eviction</scheme-ref-->
                <scheme-name>inMemory</scheme-name>
              </local-scheme>
            </internal-cache-scheme>
            <cachestore-scheme>
              <class-scheme>
                <class-name>spring-bean:coherenceCacheStore</class-name>
                <init-params>
                  <init-param>
                    <param-name>setEntityName</param-name>
                    <param-value>{cache-name}</param-value>
                  </init-param>
                </init-params>
              </class-scheme>
            </cachestore-scheme>
            <!--refresh-ahead-factor>0.5</refresh-ahead-factor-->
          </read-write-backing-map-scheme>
        </backing-map-scheme>
        <autostart>true</autostart>
      </distributed-scheme>

    Hi,
    If you look at the documentation for transactional-scheme here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BHCIABHA
    you will see that it says "The transactional-scheme element defines a transactional cache, which is a specialized distributed cache." That means that a transactional-scheme is already a distributed-scheme.
    You will also see from the same documentation that there is no way in a transactional-scheme to configure things like cache stores or listeners or even the backing-map-scheme, as these are not supported on a transactional-scheme - so you cannot use a cache store.
    Personally, I would not use a transactional-scheme unless you have some really big reason to do so - the restrictions far outweigh any perceived advantage of having a transaction. There are better ways to build applications so they do not require transactions; that is what we have been doing with Coherence for years, and there is no real reason to change that.
    JK

  • JDBC to IDOC scenario and database update the idoc number

    Hi SDNers,
    I am willing to pull data from a database table to generate an IDoc in SAP, and I want to update the DATABASE with the IDoc number in a single scenario.
    Kindly suggest.
    REGARDS!!
    SSR

    Hi,
    Please keep in mind that the IDoc number generated in PI will be different from the one generated in SAP for the IDoc...
    The correlation between the IDocs in both systems is the message ID...
    HTH
    Rajesh

  • How to use the mirrored and log shipped secondary database for update or insert operations

    Hi,
    I am doing a DR test where I need to test the mirrored and log shipped secondary database, but without stopping the mirroring or log shipping procedures. Is there a way to get the data out of the mirrored and log shipped database into another database for update or insert operations?
    A database snapshot can be used only for the mirrored database, but updates cannot be done there. Also, the secondary database of log shipping cannot be used for a database snapshot. Any ideas on how this can be implemented?
    Thanks,
    Preetha

    Hmm, in this case I think you need Merge Replication; otherwise it defeats the purpose of DR.
    Best Regards, Uri Dimant, SQL Server MVP
    http://sqlblog.com/blogs/uri_dimant/

  • Next/Previous pages and database update

    Hi Folks,
    Am hoping someone can point me in the right direction on figuring this out...
    I have 120 individuals in the database with related fields. Instead of displaying 120 rows when I pull the data from the database, I've built Next/Previous pages for displaying them 30 to a page.
    Here's the thing: This is an admin section, pulling names and IDs and fields (text, radio buttons, dropdowns, etc.) for data entry, or rather, database update.
    So, after displaying 30 on the page you have a link on the bottom of the page to display the next 30. This page refreshes itself...
    Either I have to cache the update info from each page and put a submit button for updating on the last page, or submit each page (30 at a time) until each succeeding page is done, one at a time. See what I'm getting at? I think I'd rather be able to submit 30 at a time than try to cache all the data as you move from page to page, but I'm having a hard time figuring this out.
    I'm not sure what is best, or just how to do the database insert/update, since the Next/Previous CF code depends on the same page refreshing itself each time for the next pages.
    Also, the first page of this group of Next/Previous pages is dependent on its previous page, "select school", so that the first Next/Previous page (and the following ones) displays the individuals from that particular school - 30 at a time.
    Can anyone point me in the right direction on this kind of database insert/update when used in conjunction with Next/Previous page links?
    Thanks!
    - ed

    Thank y'all for the ideas...
    You know, before posting this I googled and googled and was surprised to find hardly anything addressing this. I would have thought it would be a fairly common way to perform edits on a large number of table rows - when you only want to display, say, 30 per page instead of 1200 records on one web page for admin edits.
    With Meensi's and Furnis' suggestions and a bit more thought... I kept thinking... it's pretty much like a shopping cart - holding a variable(s) from page to page. I know there are many ways to do this, but the problem here lies in holding variables across each page refresh.
    It seems logical to update all edits at one time - submit the form on the last page. But say you have 1200 Next/Previous pages (!) - you'd want the user to do it in bits, page by page.
    Since a clicked link clears session variables... maybe make the page refresh link (Next Page) a submit button so that, on refresh, it submits just this particular page's variables and at the same time loads the next page's variables - updating the database in increments and displaying the next batch to edit.(?)
    Does this make sense? I do want to make this a no-brainer for the user - an admin section being most intuitive is foremost to me...
    The following is (stripped down) code I'm using to run the Next/Previous pages. I think maybe if I add the database insert somewhere in the code at the top of the page, coming from a form submit button that refreshes the page at the same time - maybe this would be the way to do it.
    I'm not sure if there will be a problem wrapping the <form></form> around just the current variables on this page. Maybe not. I think I may experiment with this, though...
    Thanks for your input on this.
    It does seem like updates within Next/Previous pages would be a common practice with an easy fix, doesn't it?
    - ed
    <!-- application with session variables and headersecure placed here -->
    <!-- Per above - Put code here for database insert - An "if this page comes from the submitted form on this page" -->
    <!-- The following is, more or less, ThisPage.cfm -->
    <!-- Start displaying with record 1 if not specified via url -->
    <CFPARAM name="start" default="1">
    <!-- Number of records to display on a page -->
    <CFPARAM name="disp" default="40">
    <CFSET SchoolNameDropdown_z=structNew() />
    <CFSET structAppend(#SchoolNameDropdown_z#, URL) />
    <CFSET structAppend(#SchoolNameDropdown_z#, Form) />
    <cfquery DATASOURCE="#application.dsn#" name="search">
    etcetera
    </cfquery>
    <CFSET end=start + disp>
    <CFIF start + disp GREATER THAN search.RecordCount>
      <CFSET end=999>
    <CFELSE>
      <CFSET end=disp>
    </CFIF>
    <html>
    <head>
    <link href="global.css" rel="stylesheet" type="text/css" />
    </head>
    <BODY>
            <cfset bgcolor = background_table_color_1>
      <cfoutput query="search" startrow="#start#" maxrows="#end#">
       <cfif bgcolor eq background_table_color_2>
        <cfset bgcolor = background_table_color_1>
       <cfelse>
        <cfset bgcolor = background_table_color_2>
       </cfif>
          #TRIM(variable01)#
          #TRIM(variable02)#       
            <input type="radio" name="Consent" value="Yes" <cfif Consent IS "Yes">checked</cfif>> Yes
            <input type="radio" name="Consent" value="No"  <cfif Consent IS "No">checked</cfif>>No
            <input type="radio" name="Assent" value="Yes" <cfif Consent IS "Yes">checked</cfif>> Yes
            <input type="radio" name="Assent" value="No"  <cfif Assent IS "No">checked</cfif>>No
            </cfoutput>
    <CFOUTPUT>
    <!-- Display prev link -->
      <CFIF start NOT EQUAL 1>
      <CFIF start GTE disp>
        <CFSET prev=disp>
        <CFSET prevrec=start - disp>
      <CFELSE>
        <CFSET prev=start - 1>
        <CFSET prevrec=1>
      </CFIF>
      <a href="ThisPage.cfm?start=#prevrec#"><<< Previous #prev# </a>
      </cfif>
    <!-- Display next link -->
    <CFIF end LT search.RecordCount>
      <CFIF start + disp * 2 GTE search.RecordCount>
        <CFSET next=search.RecordCount - start - disp + 1>
      <CFELSE>
        <CFSET next=disp>
      </CFIF>   
        <a href="ThisPage.cfm?start=#Evaluate("start + disp")#">Next #next# >>> </a>
      </cfif>
    </CFOUTPUT>
    </BODY>
    </HTML>
    <!-- end -->

  • Junk and Virus database not updating.

    I have 10.4.11 installed and am running mail. In the filters tab I have the box checked to:
    Update the Junk mail and virus database [1] time(s) every day.
    and the "Last update" is "not available"
    The server has been up for several days and has not updated the database.
    how can I get the database to update?
    Thanks
    KRR

    Aimee-
    Until you 'commit' from the session in which you issued the 'delete' statement, the other sessions will not be able to see the changes. In other words, the other sessions are getting a read-consistent view of the data.

  • API and Database updates

    Greetings!
    I've searched the archives for the forum and I can't find anything relating to my issue, so hopefully someone will have an idea of how I can get started on a new project.
    I have an SQL database that contains a row for each hour of each day. Every row contains a date (obviously for every one day there are 24 rows), an hour interval (1:00pm), the day of the month, the day of the week (Saturday), the Meridian, the Military time conversion (13:00), the week of the year, etc. This database is used for reporting purposes and it runs out of rows as of next week. I have been tasked with adding enough rows to last us another 2 years, which is about 17,000 rows.
    My colleague mentioned that I might be able to find what I need in the API and then write a little Java program that interfaces with my database to update the rows. Being completely new, I'm a little unsure as to how to go about researching this.
    I did poke in the API and I found the DateFormat class in the java.text package. It seems to have just about everything I need except for the military conversion.
    Does anyone have any thoughts on how I could use this class to write out these future rows to the database? One thing I should mention is that each row has a unique row-identifier that increments sequentially.
    If I have been unclear, please let me know and I will try to clarify. Any help or a nudge in the right direction would be greatly appreciated!
    Thank you!
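    For what it's worth, here is a minimal sketch of generating the hourly values (the start date, row count, and column layout are assumptions, and the JDBC INSERT is omitted); note that SimpleDateFormat's HH pattern produces the 24-hour "military" time that seemed to be missing:

    import java.text.SimpleDateFormat;
    import java.util.Calendar;

    public class HourRowGenerator {
        public static void main(String[] args) {
            // Formats guessed from the description; adjust to the real schema.
            SimpleDateFormat date = new SimpleDateFormat("yyyy-MM-dd");
            SimpleDateFormat hour12 = new SimpleDateFormat("h:00a");   // 1:00PM
            SimpleDateFormat hour24 = new SimpleDateFormat("HH:00");   // 13:00
            SimpleDateFormat dayOfWeek = new SimpleDateFormat("EEEE"); // Saturday
            SimpleDateFormat weekOfYear = new SimpleDateFormat("w");

            Calendar cal = Calendar.getInstance();
            cal.set(2008, Calendar.JANUARY, 1, 0, 0, 0); // hypothetical start

            // Two years of hourly rows (~17,520); the sequential row id would
            // normally continue from the table's current maximum.
            for (int id = 1; id <= 17520; id++) {
                System.out.println(id + ", " + date.format(cal.getTime())
                        + ", " + hour12.format(cal.getTime())
                        + ", " + dayOfWeek.format(cal.getTime())
                        + ", " + hour24.format(cal.getTime())
                        + ", week " + weekOfYear.format(cal.getTime()));
                // In the real program, bind these values to an INSERT via JDBC.
                cal.add(Calendar.HOUR_OF_DAY, 1);
            }
        }
    }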

    Java can access SQL databases through JDBC. Here is a
    JDBC tutorial that can help you get started:
    http://java.sun.com/docs/books/tutorial/jdbc/index.htm
    I have looked at this tutorial, yet there is a problem that I can't really deal with:
    * I am using NetBeans (Windows XP), MySQL server, and the Connector/J driver.
    * In NetBeans, in the RealTime tab, I can access my database in MySQL, but every time I run my code (in Java) to connect to it and manipulate it, it tells me:
    SQLException: No suitable driver
    SQLState: 08001
    VendorError: 0
    My question is: why can I access my database through 'Databases' in the 'RealTime' tab, but not make a simple connection with it when I run simple code like this:
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    public class LoadDriverP {
        public static void main(String[] args) {
            try {
                // Register the MySQL JDBC driver (Connector/J must be on the classpath).
                Class.forName("com.mysql.jdbc.Driver");
            } catch (Exception ex) {
                // handle the error
                ex.printStackTrace();
                return;
            }
            try {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/world", "user", "pass");
                // Do something with the Connection
            } catch (SQLException ex) {
                // handle any errors
                System.out.println("SQLException: " + ex.getMessage());
                System.out.println("SQLState: " + ex.getSQLState());
                System.out.println("VendorError: " + ex.getErrorCode());
            }
        }
    }
    Could it be anything to do with the CLASSPATH? If so, please tell me what this CLASSPATH is and how I can change it.
    Any help would be greatly appreciated.

  • Re: what is difference between sap locking and database locking

    Hi,
    What is the difference between SAP locking and database locking? I locked the table MARA using lock objects, but I am unable to unlock it. I give you the code below; please check it.
    REPORT zlock .
    CALL FUNCTION 'ENQUEUE_EZTEST3'
    EXPORTING
       MODE_MARA            = 'S'
       MANDT                = SY-MANDT
       MATNR                = 'SOU-1'.
    call transaction 'MM02'.
    CALL FUNCTION 'DEQUEUE_EZTEST3'
         EXPORTING
              mode_mara = 'E'
              mandt     = sy-mandt
              matnr     = 'SOU-1'.
    IF sy-subrc = 0.
      WRITE: 'IT IS unlocked'.
    ENDIF.

    Hi Paluri,
    Here is the difference between SAP locks and database locks; I will try to find the solution to your code.
    Regards
    Ashish
    Database Locks: The database system automatically sets database locks when it receives change statements (INSERT, UPDATE, MODIFY, DELETE) from a program. Database locks are physical locks on the database entries affected by these statements. You can only set a lock for an existing database entry, since the lock mechanism uses a lock flag in the entry. These flags are automatically deleted in each database commit. This means that database locks can never be set for longer than a single database LUW; in other words, a single dialog step in an R/3 application program.
    Physical locks in the database system are therefore insufficient for the requirements of an R/3 transaction. Locks in the R/3 System must remain set for the duration of a whole SAP LUW, that is, over several dialog steps. They must also be capable of being handled by different work processes and even different application servers. Consequently, each lock must apply on all servers in that R/3 System.
    SAP Locks:
    To complement the SAP LUW concept, in which bundled database changes are made in a single database LUW, the R/3 System also contains a lock mechanism, fully independent of database locks, that allows you to set a lock that spans several dialog steps. These locks are known as SAP locks.
    The SAP lock concept is based on lock objects. Lock objects allow you to set an SAP lock for an entire application object. An application object consists of one or more entries in a database table, or entries from more than one database table that are linked using foreign key relationships.
    Before you can set an SAP lock in an ABAP program, you must first create a lock object in the ABAP Dictionary.

  • I just want itunes music and database to be on external HD not internal

    What am I doing wrong? I have my iTunes music library on my external HD, and my iTunes preferences say that my iTunes music folder location is just that. It's about 41 GB. OK, but why is there an iTunes folder in the default user music folder on my system drive, 7 GB in size, with certain titles in it that did not make it to the external drive? The database files are in there as well. Shouldn't everything, including the database files, be located in the external iTunes music folder? How do I have iTunes automatically update the external HD folder, put the database files there as well, and stop using the internal system drive for storage and the database?

    I did that a while ago. But now it seems there are 7 GB of stray files. Is there a sync feature that moves only the iTunes files that are outside the designated iTunes folder? In my case they are living on the system drive for some reason...

  • ERROR: Exception occured while encrypting the configuration and database

    I'm facing the below issue/error during OIM 11g R2 configuration (fresh install). Resolutions from other blogs for the same error (DOMAIN_HOME misconfigured) aren't helping in my case.
    Thanks for your help
    updateMLSLocale:ORACLE_HOME :/fmw/Oracle_IDM1
    updateMLSLocale:LOCALE_PROPERTIES_FILE :/fmw/Oracle_IDM1/inventory/Scripts/ext/jlib/oim/OIMLocales.properties
    java.lang.Exception: Exception occured while encrypting the configuration and database
      at oracle.as.install.oim.config.util.EncryptConfigurationAndDB.encryptConfigurationAndDatbase(EncryptConfigurationAndDB.java:239)
      at oracle.as.install.oim.config.OIMConfigManager.encryptDB(OIMConfigManager.java:1035)
      at oracle.as.install.oim.config.OIMConfigManager.configureOIM(OIMConfigManager.java:891)
      at oracle.as.install.oim.config.OIMConfigManager.doExecute(OIMConfigManager.java:583)
      at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:371)
      at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
      at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
      at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
      at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:64)
      at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:160)
      at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
      at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
      at java.lang.Thread.run(Thread.java:662)
    Caused by: java.lang.Exception: Exception occured while encrypting the database
      at oracle.as.install.oim.config.util.EncryptDataBase.encryptDBContent(EncryptDataBase.java:159)
      at oracle.as.install.oim.config.util.EncryptConfigurationAndDB.encryptConfigurationAndDatbase(EncryptConfigurationAndDB.java:230)
      ... 12 more
    Caused by: java.lang.Exception: Exception occured in updateMLSLocale method while updating Locale to OIM DB
      at oracle.as.install.oim.config.util.EncryptDataBase.updateMLSLocale(EncryptDataBase.java:318)
      at oracle.as.install.oim.config.util.EncryptDataBase.encryptDBContent(EncryptDataBase.java:125)
      ... 13 more
    Caused by: java.sql.SQLIntegrityConstraintViolationException: ORA-00001: unique constraint (DEV_OIM.UK_MLS_LOCALE_MLS_LOCALE_CODE) violated
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
      at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
      at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
      at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
      at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
      at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
      at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:217)
      at oracle.jdbc.driver.T4CPreparedStatement.executeForRows(T4CPreparedStatement.java:1115)
      at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
      at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3769)
      at oracle.jdbc.driver.OraclePreparedStatement.executeUpdate(OraclePreparedStatement.java:3904)
      at oracle.jdbc.driver.OraclePreparedStatementWrapper.executeUpdate(OraclePreparedStatementWrapper.java:1512)
      at oracle.as.install.oim.config.util.EncryptDataBase.updateMLSLocale(EncryptDataBase.java:310)
      ... 14 more

    Hi
    I faced this issue before; reinstalling is the option you have. Verify the version of RCU before you start creating the schema. Set up all the pre-DB settings, hostname and IP address; if the DB and OIM are on different machines, check pinging from both sides.
    Please drop all old schemas and create a new "Prefix" for the fresh installation; don't use the old schema.
    Let me know.
    Thanks,
    Ari

  • Unique file name to be decoded and to be updated in a table along with data

    Hi
    I'm working on a File to Proxy scenario where the file names (10 characters long) are unique. These files will be available to XI in a source directory. My requirement is that the file name needs to be decoded into 3 values and updated into an R/3 database table along with the file data.
    Hope my requirement is clear.
    Thanks.

    You cannot see the field which stores the file name; the file name comes from the payload at runtime.
    Secondly, there is no need to create any input parameters in your UDF. Just edit your UDF and delete the input parameter (the default input is 'a') so that you don't have to map any constant to this UDF; just map the UDF to the target field.
    e.g. UDF ---> target field.
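    A minimal UDF body along those lines (a sketch; it assumes the sender File adapter has "Adapter-Specific Message Attributes" enabled so the file name is available, and that com.sap.aii.mapping.api.* is in the UDF's imports):

    // No input parameters; the mapping runtime supplies 'container'.
    DynamicConfiguration conf = (DynamicConfiguration) container
            .getTransformationParameters()
            .get(StreamTransformationConstants.DYNAMIC_CONFIGURATION);
    DynamicConfigurationKey key = DynamicConfigurationKey.create(
            "http://sap.com/xi/XI/System/File", "FileName");
    return conf.get(key); // e.g. the 10-character file name, ready to be decoded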
    I did the same but am not able to activate the mapping.
    Mapping activation error:
    Activation of the change list canceled. Check result for message mapping hello_mapping | http://briks.com:
    Mapping not sufficiently defined
    Chennai.

  • Re: Workspace integration and Database Mapping

    Subject: Re: Workspace integration and Database Mapping
    >
    1. I agree with you that this is a weak point of Forte. A possible workaround is
    to have a Workspace 'BugFixes' where you fix your bugs without affecting the rest
    of your code.
    1 - When you integrate a workspace into the repository it takes all the
    changes you have made in that workspace and 'saves' them to the
    repository. However, what if you have a workspace which has a half
    finished project as well as another project which you make a small but
    important change to. The small change must go back because other
    developers need the 'fix', but the half finished project will essentially be
    'broken' if it is integrated at this stage. Is there any way to integrate
    only specific projects? If not, how do you stop the half finished project
    being copied into the other developers' workspaces when they do an
    'Update'.
    Hi Forte'rs
    I just would like to express my deep appreciation for the way integrating
    workspaces works in Forte.
    This makes it a lot safer to work with, because you can only test the
    COMPLETE set of code and not just the 'few' changes you just made...
    I, at least, always make quite a few changes in a bunch of classes, so it would be
    a complete mess to try to sort out which changes to integrate and which to keep
    in my workspace only.
    Now I also have been frustrated not being able to make a quick (and dirty) fix - but
    in hindsight - it is clear that you cannot be sure that the fix works in the real world
    (you know: the stuff outside your own heavily modified workspace).
    So while I agree with everyone that got frustrated by not being able to integrate
    just a few changes, I am also happy that this is not allowed!
    may the forte be with you all
    Jens Chr Juul Jensen
    KAD/Denmark

  • Coherence and EclipseLink - JTA Transaction Manager - slow response times

    A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
    Java 1.7
    Using Spring Framework 4.0.5
    EclipseLink 12.1.2
    TopLink Grid 12.1.2
    Coherence 12.1.2
    javax.persistence 12.1.2
    The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
    When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking into DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager, but we're not sure why. If we use a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
    Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?
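    For reference, a minimal Spring sketch of the two transaction manager setups being compared (bean and class names are illustrative, not from the original application):

    import javax.persistence.EntityManagerFactory;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.orm.jpa.JpaTransactionManager;
    import org.springframework.transaction.PlatformTransactionManager;

    @Configuration
    public class TxConfig {
        // Local resource transactions: the fast (~500 ms) path described above.
        @Bean
        public PlatformTransactionManager transactionManager(EntityManagerFactory emf) {
            return new JpaTransactionManager(emf);
        }

        // Container JTA transactions: the slow path being diagnosed. On WebSphere,
        // Spring's WebSphereUowTransactionManager is the usual JTA choice:
        // @Bean
        // public PlatformTransactionManager transactionManager() {
        //     return new org.springframework.transaction.jta.WebSphereUowTransactionManager();
        // }
    }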

    Hi Volker/Markus,
    Thanks a lot for the response.
    Yeah Volker, you are absolutely right: the 10-12 seconds happen when we have not used the transaction for several minutes... Looks like the transactions are moved out of the SAP buffer, or something, in a very short time.
    And yes, the ABAP WPs are running in pool 2 (*BASE), and the Java server I have set up in another memory pool of 7 GB.
    I would say the performance of the Java part is much better than the ABAP part.
    Should I just remove the ABAP part of SOLMAN from memory pool 2 and assign the Java/ABAP parts a separate huge memory pool of, say, 12-13 GB?
    Is that likely to improve my performance?
    No, I have not changed RSDB_TDB in TCOLL from daily twice to weekly once on all systems on this box. It is running twice daily right now.
    Should I change it to weekly once on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
    But my CPU utilization is anyway only about 30% on average. It's i570 hardware, currently running 5 CPUs.
    So do you still think I should change this job from daily twice to weekly once on all systems on this box?
    Markus, did you open any messages with SAP on this issue?
    I remember working on change management in the 3.2 version of Solution Manager, and the response times were much better then, as compared to 4.0.
    Let me know, guys, and once again, thanks a lot for your help and valuable input.
    Abhi

  • Recreate SAP and database services in Windows

    Hi!
    I'm running my sandbox in a VMware environment, and yesterday the C: partition got corrupted; I managed to "save" the other drives, where SAP and the database are installed.
    Is there any way to recreate the SAP and database services in another Windows installation (manually or automatically)?
    That way I don't need to reinstall the whole system... and no, there is no backup or snapshot, since the system is not complete.
    But if I can get the database to start, I should be able to do a backup and then do a new installation and import the backup!?
    Thanks for any input
    rollo

    What is the database platform?
    If it is SQL Server, try attaching the data & log files to another SQL instance at the same release & patch level; if that works, the database is intact, so recovery should be possible.
    From there, rebuild your VMware box and treat the SAP install as though you are doing a 'system copy', so:
      -Install & patch Win2003
      -Install & patch SQL Server
      -Install a 'blank' SAP Central Instance & update kernel
      -Attach SQL database
      -Use SAP's SQL migration tools to prepare the DB & instance
      -cross fingers & startsap
      -make a backup
    Check out http://service.sap.com/instguides for the System Copy guide that matches your release
    If that helps please reward points
    Cheers
    Danny

  • The product version and database version are not compatible

    The following simple program gets an exception {The product version and database version are not compatible}, and it's very hard to proceed from here. Does anybody know what causes this?
    Best Regards
    Jan Isacsson
    using System;
    using System.Collections.ObjectModel;
    using Microsoft.MasterDataServices.Deployment;
    using Microsoft.MasterDataServices.Services.DataContracts;

    namespace MdsDeploy
    {
        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    ModelReader reader = new ModelReader();
                    Collection<Identifier> models = reader.GetModels();
                    foreach (Identifier modelId in models)
                        Console.WriteLine(modelId.Name);
                }
                catch (System.Exception ex)
                {
                    Console.WriteLine("Error: " + ex.Message);
                }
                Console.ReadKey();
            }
        }
    }

    Hi Jan,
    For the error "The product version and database version are not compatible", as Emma said, the version number of the service does not match the database schema version.
    In your scenario, which version of the database are you using? Please note that an MDS update is required after SQL 2012 SP1 installation; please refer to the links below for details.
    http://byobi.com/blog/2012/11/mds-update-required-after-sql-2012-sp1-installation/
    http://msdn.microsoft.com/en-IN/library/gg488708.aspx
    Regards,
    Charlie Liao
    TechNet Community Support
