Multiple caches and transactions...

As I read the documentation, it is possible for several caches (belonging to the same service) to participate in one XA transaction - is this correct, or are there any problems / limitations one needs to be aware of?
Note: I am talking here about partitioned caches with no cache loader (cache-aside pattern).
Best Regards
Magnus

Hi Magnus,
Yes, it is possible - please take a look at this example: "Q: How do I test the Transactional Cache?"
Regards,
Dimitri
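
For illustration, here is a minimal sketch of the idea using the Coherence 3.6 transaction framework. The service name "TransactionalCache" and the cache names are made up; the point is that both caches are mapped to the same transactional service, so a single commit (or rollback) covers them both:

    import com.tangosol.coherence.transaction.Connection;
    import com.tangosol.coherence.transaction.DefaultConnectionFactory;
    import com.tangosol.coherence.transaction.OptimisticNamedCache;

    public class TwoCacheTxExample {
        public static void main(String[] args) {
            // "TransactionalCache" is an assumed service name; "tx-orders" and
            // "tx-payments" are assumed cache names mapped to that same service.
            Connection con = new DefaultConnectionFactory().createConnection("TransactionalCache");
            con.setAutoCommit(false);
            try {
                OptimisticNamedCache orders   = con.getNamedCache("tx-orders");
                OptimisticNamedCache payments = con.getNamedCache("tx-payments");

                orders.insert("order-1", "order data");       // both inserts are part of
                payments.insert("payment-1", "payment data"); // the same transaction
                con.commit();                                 // ...and become visible atomically
            } catch (Exception e) {
                con.rollback();                               // neither insert is applied
            } finally {
                con.close();
            }
        }
    }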

Similar Messages

  • Transactions involving multiple caches and a database

    Hi all,
    I'm curious if the following is possible with the transaction support in Coherence.
    I have the need to write data to two caches and a database from within my Tomcat container, and the whole operation must be atomic.
    Example:
    Write to database (this is NOT via a CacheStore)
    Write to cache 1 (uses write-through to database)
    Write to cache 2 (uses write-through to database)
    If any operation fails, the whole transaction needs to roll back. This feels like an XA transaction, but it looks like it won't work the way I expect, because the cache must be the last resource and I have two caches. The ordering of operations is not important.
    Thanks,
    Rob

    MagnusE wrote:
    When using write-through there is (as far as I know) no way to get fully transactional behaviour (assuming you have more than one cache node), since each node is responsible for persisting its own data items (they will each use a separate connection to the database).
    If you, on the other hand, use a "cache-aside" pattern, this can be made transactional using XA. As long as both caches belong to the same cache service they count as a single "last resource"...
    /Magnus
    You also need to use the same transaction isolation and concurrency settings for it to count as the same last resource. Practically, you should only have a single CacheAdapter instance enrolled in the same transaction (see the sketch below).
    Best regards,
    Best regards,
    Robert
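
    For reference, a rough sketch of that setup with the JCA adapter, assuming coherence-tx.rar is deployed under the JNDI name "tangosol.coherenceTx"; the cache names are made up, both caches must be managed by the same cache service, and the adapter constructor arguments are an outline rather than gospel:

    import javax.naming.Context;
    import javax.naming.InitialContext;
    import javax.transaction.UserTransaction;
    import com.tangosol.net.NamedCache;
    import com.tangosol.run.jca.CacheAdapter;

    // Runs inside a container-managed component (e.g. a session bean method).
    public void updateBothCaches() throws Exception {
        Context ctx = new InitialContext();
        UserTransaction tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
        tx.begin();
        try {
            // One CacheAdapter instance (one concurrency/isolation setting) acts as
            // the single "last resource" for every cache obtained through it.
            CacheAdapter adapter = new CacheAdapter(ctx, "tangosol.coherenceTx",
                    CacheAdapter.CONCUR_OPTIMISTIC, CacheAdapter.TRANSACTION_GET_COMMITTED, 0);
            ClassLoader loader = Thread.currentThread().getContextClassLoader();
            NamedCache orders    = adapter.getNamedCache("orders", loader);
            NamedCache customers = adapter.getNamedCache("customers", loader);

            // Cache-aside writes; the JDBC work is enlisted as its own XA resource
            // in the same transaction.
            orders.put("order-1", "order data");
            customers.put("customer-1", "customer data");

            tx.commit();
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }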

  • IMDB Cache and transaction logs

    Hi,
    We have installed the IMDB Cache as part of a proof of concept. We want to cache a large Oracle table (approx 900 million rows) into a read-only local cache group, and are finding the amount of space taken by transaction logs during the initial cache load operation exceeds the amount of disk space available. Is there a way to prevent transaction logging during the initial cache load? A failure during the initial load is acceptable for us, as we can always reload the cache from the base Oracle table. We are using a datastore with 60GB of memory; however, the filesystem available is 273GB less the 120GB for the two datastore backing files, leaving approximately 150GB for transaction logs. To date we have only been able to load approximately 350 million rows before failing with
    5056: The cache operation fails: error_type=<TimesTen Error>, error_code=<802>, error_message: [TimesTen]TT0802: Data store space exhausted
    The datastore attributes we are using are
    [EntResPP]
    Driver=/app1/oracle/product/11.2.0/TimesTen/ER/lib/libtten.so
    DataStore=/prod100/oradata/EntResPP
    LogPurge=1
    PermSize=60000
    TempSize=2000
    PLSQL=1
    DatabaseCharacterSet=AL32UTF8
    OracleNetServiceName=TRAQPP.world
    The command we use to load the cache is
    load cache group ro commit every 256 rows parallel 4
    Thanks
    Mark

    The replication agent is only involved if you have AWT cache groups or if you are using replication. If this is a standalone datastore with a readonly cache group then it is not necessary (or possible) to run the replication agent.
    The error message you mentioned is nothing to do with transaction log space. What has happened is that the memory allocated to the permanent data region within the datastore (where table data, indexes etc. reside) has become full (this corresponds to PermSize in your DSN attributes). This means you have not allocated enough memory in TimesTen to hold all the data. Be aware that there is typically significant storage space 'inflation' when caching data. This can range from 2x through to 5x or more. So, if the table data occupies a real 10 GB in Oracle it will require between 20 and 50 GB in TimesTen.
    It is possible to suppress logging while loading the cache data (or at least it used to be prior to TT 11.2.1 - I haven't tried this in 11.2.1 myself). You'd do this as follows:
    1. Stop all application connections etc. to the datastore, stop cache and replication agents. make sure that the datastore is unloaded from memory.
    2. Change the value for 'Logging' in the DSN attributes to 0 and connect to the DSN using ttIsql as the instance administrator user.
    3. Start the cache agent. From the ttIsql session, issue the command:
    load cache group ro commit every 0 rows;
    You have to use 0 (load the entire group as a single 'transaction') and you cannot use the 'parallel' clause.
    If this fails you may have to manually delete any rows that were loaded since TT cannot rollback.
    4. When the load has completed successfully, stop the cache agent and disconnect the ttIsql session.
    5. Change Logging back to 1 and reconnect as instance administrator from ttIsql. Restart the cache agent.
    6. Start applications etc. as required.
    Note that I would consider this at best a temporary workaround. Really, you need to ensure you have enough disk space to perform the load using logging. Of course, as I mentioned, the error you are getting right now is nothing to do with log disk space...
    Chris

  • Multiple submits and Transaction setting and isolation level ????

    From a Struts JSP,
    I submit the form n times.
    I have a session facade in place. The corresponding function has the transaction attribute "Required" and the isolation level "serializable".
    As a business rule the record cannot be inserted more than once.
    But it is...
    The Facade calls the Controller, which in turn calls the VOA, which calls the DAO.
    There is a find method which looks for that unique id; if it exists, it throws a validation exception.
    There are two scenarios:
    - If I open two IE windows and submit the form, it throws the validation exception.
    - If I click on submit more than once from the same IE window, it inserts as many records.
    Can anyone please shed some light?
    But the inserts are still happening.

    Could you make the picture more clear, probably with some code snippets where the actual database handshake is taking place?
    Also, is the business rule part of the database constraints, or is it enforced at the application level? From your description it seems that it is being done at the application level. Check whether the find method check is getting executed each time, or whether it is being bypassed under certain conditions.
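
    Not from the original posters, but for reference: a sketch of the application-level guard usually used for this in Struts 1, the synchronizer-token methods saveToken()/isTokenValid()/resetToken() inherited from Action. The action class and forward names here are made up, and a database unique constraint remains the safer backstop.

    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;
    import org.apache.struts.action.Action;
    import org.apache.struts.action.ActionForm;
    import org.apache.struts.action.ActionForward;
    import org.apache.struts.action.ActionMapping;

    public class CreateRecordAction extends Action {
        public ActionForward execute(ActionMapping mapping, ActionForm form,
                HttpServletRequest request, HttpServletResponse response)
                throws Exception {
            // The action that renders the form must have called saveToken(request);
            // a repeated submit of the same rendered page then fails this check.
            if (!isTokenValid(request)) {
                return mapping.findForward("duplicateSubmit");
            }
            resetToken(request); // consume the token so a re-submit is rejected

            // ... call the session facade / DAO to insert the record here ...

            return mapping.findForward("success");
        }
    }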

  • What is the difference of specifying affinity to span multiple caches?

    What is the difference between specifying affinity so that it spans multiple caches and specifying it on a single cache?
    Can I do it in the same way?

    Thank you for reply.
    From the Coherence docs we can see that data affinity commonly means that related entries are contained within a single cache. The following are some fragments excerpted from the Coherence docs:
    "Data affinity describes the concept of ensuring that a group of related cache entries is contained within a single cache partition. This ensures that all relevant data is managed on a single primary cache node (without compromising fault-tolerance)."
    "Affinity may span multiple caches (as long as they are managed by the same cache service, which will generally be the case)."

  • Control multiple updates and queries within one transaction in JPA

    Hi,
    I have a question regarding controlling multiple updates and queries within one transaction. We are using EclipseLink 2.3.1. With the code below, will I be able to:
    - have all insert, update, select queries committed in one transaction;
    - queryGetBalance will return the latest OrgBalance after update;
    - if one fails, everything rolls back.
    Thanks!
    Jeffrey
    PS: I realized that I cannot use em.getTransaction().begin() and em.getTransaction().commit(), since I am using JTA.
    =============
    @PersistenceContext(unitName="Test")
    EntityManager em;
    em.setFlushMode(FlushModeType.COMMIT);
    newTransaction.setAmount(1000);
    newTransaction.setType("check");
    em.persist(newTransaction);
    orgAudit.setUpdateUser("Joe");
    orgAudit.setUpdateTime(time);
    em.merge(orgAudit);
    Query queryUpdateBalance = em.createQuery("update OrgBalance o set o.balance = o.balance + :amount where o.orgId = :myOrgId");
    queryUpdateBalance.setParameter("amount", 1000);
    queryUpdateBalance.setParameter("myOrgId", 1234);
    queryUpdateBalance.executeUpdate(); // the update query must actually be executed
    Query queryGetBalance = em.createQuery("select o from OrgBalance o where o.orgId = :myOrgId");
    queryGetBalance.setHint("javax.persistence.cache.storeMode", CacheStoreMode.REFRESH);
    queryGetBalance.setHint("javax.persistence.cache.retrieveMode", CacheRetrieveMode.BYPASS);
    queryGetBalance.getResultList();
    em.flush();
    Edited by: JeffreyW on Dec 12, 2011 10:34 AM

    Yes, the operation will be in a single transaction, assuming you are using a JTA managed SessionBean and the code is part of a SessionBean method.
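
    For reference, a minimal sketch of what that looks like inside a container-managed JTA transaction; the bean name and the OrgTransaction/OrgAudit entity classes are hypothetical stand-ins for the ones in your post, only OrgBalance comes from it.

    import javax.ejb.Stateless;
    import javax.ejb.TransactionAttribute;
    import javax.ejb.TransactionAttributeType;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class BalanceService {

        @PersistenceContext(unitName = "Test")
        private EntityManager em;

        // OrgTransaction and OrgAudit are hypothetical entity classes.
        @TransactionAttribute(TransactionAttributeType.REQUIRED)
        public void postTransaction(OrgTransaction newTransaction, OrgAudit orgAudit,
                                    long amount, int myOrgId) {
            em.persist(newTransaction);
            em.merge(orgAudit);

            em.createQuery("update OrgBalance o set o.balance = o.balance + :amount"
                         + " where o.orgId = :myOrgId")
              .setParameter("amount", amount)
              .setParameter("myOrgId", myOrgId)
              .executeUpdate();
            // Everything above commits together when the method returns normally,
            // and rolls back together if a system exception is thrown.
        }
    }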

  • Transactional Caches and Write Through

    I've been trying to implement the use of multiple caches, each with write through, all within a transaction.
         The CacheFactory.commitTransactionCollection(..) method only seems to work correctly if the first transactionMap throws an exception in the database code.
         If the second transactionMap throws exceptions, the caches do not appear to rollback correctly.
         I can wrap the whole operation in a JDBC transaction that rolls back the database correctly but the caches are not all rolled back because they are committed one by one?
          For example, I write to two transaction maps, each one created from separate caches. When committing the transaction maps, the second transaction map causes a database exception. It appears the first transaction map has already committed its objects and doesn't roll back.
         Is it possible to use Coherence with multiple transaction maps and get all the caches and databases rolled back?
         I've also been trying to look at using coherence-tx.rar as described in the forums within WebLogic but I'm getting @@@@@ Failed to commit: javax.transaction.SystemException: Could not contact coordinator at null+SMARTPC:7001+null+t3+
         (SMARTPC being my pc name)
         Has anybody else had this problem? Bonus points for describing how to fix it!
         Mike

     > The transaction support in Coherence is for Local
          > Transactions. Basically, what this means is that the
          > first phase of the commit ("prepare") acquires locks
          > and ensures that there are no conflicts. The second
          > phase ("commit") does nothing but push data out to
          > the caches.
         This means that once prepare succeeds (all locks acquired), commit will try to copy local data into the base map. If there is a failure on any put, rollback will undo any changes made. All locks are cleared at the end.
         > The problem is that when you are using a
         > CacheStore module, the exception is occurring during
         > the second phase.
         If you start using a CacheStore module, then database update has to be part of the atomic procedure.
         >
         > For this reason, write-through and cache transactions
         > are not a supported combination.
          This is not true for a cache transaction that updates a single cache entry, right?
         >
         > For single-cache-entry updates, CacheStore operations
         > are fully fault-tolerant in that the cache and
         > database are guaranteed to be consistent during any
         > server failure (including failures during partial
         > updates). While the mechanisms for fault-tolerance
         > vary, this is true for both write-through and
         > write-behind caches.
          For the write-through case, I believe the database and cache are atomically updated.
         > Coherence does not support two-phase CacheStore
         > operations across multiple CacheStore instances. In
         > other words, if two cache entries are updated,
         > triggering calls to CacheStore modules sitting on
         > separate servers, it is possible for one database
         > update to succeed and for the other to fail.
          But once we have multiple CacheStore modules, then once one atomic write-through put succeeds, the database is already updated for that specific put. There is no way to roll back the database update (although we can roll back the cache update). Therefore, you may end up with partial commits in situations where multiple cache entries are updated across different CacheStore modules.
         If I use write-behind CacheStore modules, I can roll back entirely and avoid partial commits? Since writes are not immediately propagated to the database? So in essence, write-behind cache stores are no different than local transactions... Is my understanding correct?
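
     For anyone reading along, here is a rough sketch of committing two TransactionMaps together with commitTransactionCollection(). The cache names are made up and the calls are from the pre-3.6 local-transaction API, so treat it as an outline; as discussed above, this coordinates only the caches, so a write-through CacheStore failure can still leave the database partially updated.

     import java.util.Arrays;
     import java.util.Collection;
     import com.tangosol.net.CacheFactory;
     import com.tangosol.net.NamedCache;
     import com.tangosol.util.TransactionMap;

     public class TwoCacheTxSketch {
         public static void main(String[] args) {
             NamedCache cacheA = CacheFactory.getCache("accounts");
             NamedCache cacheB = CacheFactory.getCache("orders");

             TransactionMap mapA = CacheFactory.getLocalTransaction(cacheA);
             TransactionMap mapB = CacheFactory.getLocalTransaction(cacheB);
             Collection txns = Arrays.asList(new TransactionMap[] {mapA, mapB});

             for (Object o : txns) {
                 TransactionMap map = (TransactionMap) o;
                 map.setTransactionIsolation(TransactionMap.TRANSACTION_REPEATABLE_GET);
                 map.setConcurrency(TransactionMap.CONCUR_PESSIMISTIC);
                 map.begin();
             }

             mapA.put("k1", "v1");
             mapB.put("k2", "v2");

             // Prepares and commits all enlisted maps; if the collective commit
             // fails, roll every map back so the caches stay consistent.
             if (!CacheFactory.commitTransactionCollection(txns, 1)) {
                 CacheFactory.rollbackTransactionCollection(txns);
             }

             CacheFactory.shutdown();
         }
     }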

  • Coherence 3.6.0 transactional cache and POF - NULL values

    Hi,
    We are trying to use the new transactional scheme defined in 3.6.0 and we encounter an abnormal behaviour. The code executes without any exception or warnings but in the cache we find the key associated with a NULL value.
    To try to identify the problem, we defined two services (see cache-config below):
    - one transactional cache
    - one distributed cache
    If we insert primitives or strings into the transactional cache, everything is normal (both key and value are visible using the Coherence console). But if we try to insert custom classes using POF, the key is inserted with a NULL value.
    In same cluster we defined a distributed cache that uses the same POF classes/configuration. A call to put will succeed in any scenario (both key and value are visible using coherence console).
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
         <caching-scheme-mapping>
              <cache-mapping>
                   <cache-name>cnt.*</cache-name>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
              </cache-mapping>
              <cache-mapping>
                   <cache-name>stt.*</cache-name>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
              </cache-mapping>
         </caching-scheme-mapping>
         <caching-schemes>
              <transactional-scheme>
                   <scheme-name>storage.transactionalcache.cnt.scheme</scheme-name>
                   <service-name>storage.transactionalcache.cnt</service-name>
                   <thread-count>10</thread-count>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </transactional-scheme>
              <distributed-scheme>
                   <scheme-name>storage.distributedcache.stt.scheme</scheme-name>
                   <service-name>storage.distributedcache.stt</service-name>
                   <serializer>
                        <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
                        <init-params>
                             <init-param>
                                  <param-type>String</param-type>
                                  <param-value>cnt-pof-config.xml</param-value>
                             </init-param>
                        </init-params>
                   </serializer>
                   <backing-map-scheme>
                        <local-scheme>
                             <high-units>250M</high-units>
                             <unit-calculator>binary</unit-calculator>
                        </local-scheme>
                   </backing-map-scheme>
                   <autostart>true</autostart>
              </distributed-scheme>
         </caching-schemes>
    </cache-config>
    Failing code (uses transaction APIs 3.6.0):
    public static void main(String[] args) {
        Connection con = new DefaultConnectionFactory().createConnection("storage.transactionalcache.cnt");
        con.setAutoCommit(false);
        try {
            OptimisticNamedCache cache = con.getNamedCache("cnt.t1");
            CId tID = new CId();
            tID.setId(11111L);
            C tC = new C();
            tC.setVal(new BigDecimal("100.1"));
            cache.insert(tID, tC);
            con.commit();
        } catch (Exception e) {
            e.printStackTrace();
            con.rollback();
        } finally {
            con.close();
        }
    }
    Code that succeeds (but without transaction APIs):
    public static void main(String[] args) {
        try {
            NamedCache cache = CacheFactory.getCache("stt.t1");
            CId tID = new CId();
            tID.setId(11111L);
            C tC = new C();
            tC.setVal(new BigDecimal("100.1"));
            cache.put(tID, tC);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
        }
    }
    And here is what we list using coherence console if we use transactional APIs:
    Map (cnt.t1): list
    CId {
    id = 11111
    } = null
    Any suggestion, please?

    Cristian,
    After looking at your configuration I noticed that it is incorrect. For a transactional scheme you cannot specify a backing-map-scheme.
    Your config contained:
    <backing-map-scheme>
    <local-scheme>
    <high-units>250M</high-units>
    <unit-calculator>binary</unit-calculator>
    </local-scheme>
    </backing-map-scheme>
    To specify high-units for a transactional scheme, simply provide a high-units element directly under the transactional-scheme element.
    <transactional-scheme>
        <scheme-name>small-high-units</scheme-name>
        <service-name>TestTxnService</service-name>
        <autostart>true</autostart>
        <high-units>1M</high-units>
    </transactional-scheme>
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/api_transactionslocks.htm#BEIBACHA
    The reason that it is not allowable to specify a backing-map-scheme for a transactional scheme is that transactional caches use their own storage.
    I am not sure why this would work with primitives and only fail with POF. We will look into this further here and try to reproduce.
    Can you please change your configuration with the above changes and let us know your results.
    Thanks,
    John
    Edited by: jspeidel on Sep 16, 2010 10:44 AM

  • "Lightroom encountered an error when reading from its preview cache and needs to quit. How do I fix this

    Recently updated to Windows 8.1 and got the message: "Lightroom encountered an error when reading from its preview cache and needs to quit. Lightroom will attempt to fix the error the next time it launches." I get the same message on multiple launches. Anyone know how to fix this?

    Re: "Lightroom encountered and error when reading from its preview cache and needs to quit"

  • Is there a way to select MULTIPLE tabs and then copy ALL of the URLs and titles/or URLs+titles+HTML links? This can be done with the Multiple Tab Handler add-on; however, I prefer to use a Firefox feature rather than download an add-on. Thanks.

    Currently, I can copy ONE tab's URL and nothing else (not its name). Or I can bookmark all tabs that are open. However, I'd like the ability to select multiple tabs and then copy ALL of the URLs AND their titles, or copy ALL of the URLs+titles+HTML links. This can be done with the Multiple Tab Handler add-on; but when I download the add-on, I get a message saying that using it will disable Firefox's tab features. I prefer to use Firefox features rather than download and use an add-on. Is there a way to do this without an add-on?

    Hi LRagsdale517,
    You should definitely be able to upload multiple files by Shift-clicking or Ctrl-clicking the files you want to upload. Just to make sure you don't have an old version of the service cached, please clear the browser cache and then log in to https://cloud.acrobat.com/files. After clicking the File Upload icon in the upper-right corner, you should be able to select multiple files for upload.
    Please let us know how it goes.
    Best,
    Sara

  • Problem with Expiry Period for Multiple Caches in One Configuration File

    I need to have a cache system with multiple expiry periods, i.e. a few records should exist for, let's say, 1 hour, some for 3 hours and others for 6 hours. To achieve this, I am trying to define multiple caches in the config file. Based on the data, I choose the cache (with the appropriate expiry period). That's where I am facing this problem. I am able to create the caches in the config file. They have different eviction policies, i.e. for Cache1 it is 1 hour and for Cache2 it is 3 hours. However, the data that is stored in Cache1 is not expired after 1 hour. It expires after the expiry period of the other cache, i.e. Cache2.
    Please correct me if I am not following the correct way of achieving this. I am attaching the config file here. Attachment: near-cache-config1.xml (to use this attachment you will need to rename 142.bin to near-cache-config1.xml after the download is complete.)

    Hi Rajneesh,
    In your cache mapping section, you have two wildcard mappings ("*"). These provide an ambiguous mapping for all cache names.
    Rather than doing this, you should have a cache mapping for each cache scheme that you are using -- in your case the 1-hour and 3-hour schemes.
    I would suggest removing one (or both) of the "*" mappings and adding entries along the lines of:
    <cache-mapping>
        <cache-name>near-1hr-*</cache-name>
        <scheme-name>default-near</scheme-name>
    </cache-mapping>
    <cache-mapping>
        <cache-name>near-3hr-*</cache-name>
        <scheme-name>default-away</scheme-name>
    </cache-mapping>
    With this scheme, any cache that starts with "near-1hr-" (e.g. "near-1hr-Cache1") will have 1-hour expiry. And any cache that starts with "near-3hr-" will have 3-hour expiry. Or, to map your cache schemes on a per-cache basis, in your case you may replace "near-1hr-*" and "near-3hr-*" with Cache1 and Cache2 (respectively).
    Jon Purdy
    Tangosol, Inc.
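
    Unrelated to the mapping fix above, but worth noting: if the expiry really needs to vary record by record rather than cache by cache, Coherence also accepts a per-entry time-to-live on put (CacheMap.put(key, value, cMillis)), provided the underlying scheme supports expiry. A tiny sketch, with made-up cache and key names:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class PerEntryExpiryExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("near-1hr-Cache1");

            // The third argument is a per-entry time-to-live in milliseconds,
            // overriding the expiry configured for the cache scheme.
            cache.put("one-hour-key",   "value", 60L * 60 * 1000);
            cache.put("three-hour-key", "value", 3L * 60 * 60 * 1000);

            CacheFactory.shutdown();
        }
    }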

  • WAE 512 and transaction logs problem

    Hi guys,
    I have a WAE 512 with ACNS 5.5.1b7 and I'm not able to export archived logs correctly. I tried to configure the WAE as below:
    transaction-logs enable
    transaction-logs archive interval every-day at 23:00
    transaction-logs export enable
    transaction-logs export interval every-day at 23:30
    transaction-logs export ftp-server 10.253.8.125 cache **** .
    and the WAE exported only one file of about 9 MB, even though the files were stored on the WAE, as you can see from the output:
    Transaction log configuration:
    Logging is enabled.
    End user identity is visible.
    File markers are disabled.
    Archive interval: every-day at 23:00 local time
    Maximum size of archive file: 2000000 KB
    Log File format is squid.
    Windows domain is not logged with the authenticated username
    Exporting files to ftp servers is enabled.
    File compression is disabled.
    Export interval: every-day at 23:30 local time
    server type username directory
    10.253.8.125 ftp cache .
    HTTP Caching Proxy logging to remote syslog host is disabled.
    Remote syslog host is not configured.
    Facility is the default "*" which is "user".
    Log HTTP request authentication failures with auth server to remote syslog host.
    HTTP Caching Proxy Transaction Log File Info
    Working Log file - size : 96677381
    age: 44278
    Archive Log file - celog_213.175.3.19_20070420_210000.txt size: 125899771
    Archive Log file - celog_213.175.3.19_20070422_210000.txt size: 298115568
    Archive Log file - celog_213.175.3.19_20070421_210000.txt size: 111721404
    I made a test and configured archiving every hour from 12:00 to 15:00 and the export at 15:10; the WAE transferred only three files, the one for 12:00, the one for 13:00 and the one for 14:00 - the 15:00 one was missed.
    What can I do?
    Thx
    davide

    Hi Davide,
    You seem to be missing the path on the FTP server, which goes on the export command.
    Disable transaction logs, then remove the export command and add it again like this: transaction-logs export ftp-server 10.253.8.125 cache **** / ; after that, enable transaction logs again and test it.
    Let me know how it goes. Thanks!
    Jose Quesada.

  • Linking multiple tables and Trying to insert records into Detail

    Hello,
    I have been struggling with this one for years...
    I am linking three tables: Work Order, Employee Labor, and Materials. I then create a group header using the Location field from the Work Order table. I then create another group, suppress the group name, and insert several fields from the Work Order table (work order number, WO description, status and completion date) into the Group 2 section. Multiple rows of info are displayed. I then enter some Employee Labor fields from the Labor table into the Details section (employee name, labor hours, pay rate, etc.). I get several lines of employee labor transaction information grouped below each row of work order info from the Group 2 section. So far, so good.
    Now I attempt to bring a field into the Details section from the 3rd table (Materials). The moment I introduce the field, the rows of employee labor are duplicated over and over. In fact, there are multiple duplicate labor transaction rows (over and over).
    What am I doing wrong? I've tried every combination of linking order that I can think of... I need help.
    Please reply.
    ps.  Let me know if I need to attach a screen shot.
    Thank you
    Robert

    hi Robert,
    this is a common issue when you add several details tables to a report. once you add the fields, then the sql generated will include those parent tables and you get a multiplier of records.
    my recommendation to you would be to use a subreport for the materials table instead.
    go to the Insert menu, choose Subreport and create a new subreport using the Wizard then add your current connection but this time only add the materials table. in the Linking tab, choose your Work order and then add the subreport to the Work order group header. now in the subreport add some fields from your materials table onto the subreport canvas.
    what this is essentially doing is a subquery to the database and returning all associated materials records based on a filter for the work order.
    the main report should not include any materials records and you can also remove this table from the database expert for the main report.
    i hope this helps,
    jamie

  • After Effects CS6: Cache and Reload Footage Problem

    After Effects CS6: Cache and Reload Footage Problem
    I am having trouble getting my footage to update in an AE project. 
    In an existing project, there is an image sequence in a composition.  I have saved and closed AE, modified the footage (same dimensions and qty. of frames, only the pixel data changes), and then re-launched AE and opened the project.  Though I have changed the footage, the old content still appears.
    To resolve this, I have done the following:
    Edit>Purge>All Memory
    Edit>Purge>Image Cache Memory
    Edit>Preferences>Media & Disk Cache>Empty Disk Cache
    Edit>Preferences>Media & Disk Cache>Clean Database & Cache
    From the Project window I've selected all footage and selected Reload
    None of these got the footage to update.  I could double-click on the footage in the project bin and scrub the play head and the old footage was still displayed.
    Finally I had to replace the footage with itself and that resolved the issue.
    If this is operator error, lack of understanding the cache functions, or something else, I'd appreciate some assistance. I don't recall this behavior in CS 5.5, so I am guessing this is a change in CS 6.
    Win 7 Pro 64bit Service Pack 1
    Intel Xeon CPU X5450 @ 3.00 GHz 2.99 GHz
    RAM 20 GB

    about Illustrator and Photoshop files in CS6 especially - you should work with Ctrl/Cmd+E (Edit Original).
    when in Ae and you want to change a source file, use this command on one of the layers
    from the source file you wish to change (it doesn't matter if it's in the timeline or project window) and
    your original software linked to that file will open.
    when you work this way you make AE reload that file and any change after that will be seen back in AE immediately.
    it is also a very convenient way to work because there can be no confusion about which file you are on,
    if you work with multiple source files for example.
    even if you made the change in your opened Photoshop/Illustrator file and went back to Ae
    and didn't see any change - press Ctrl/Cmd+E on one of the layers and this will refresh it immediately.
    This is the case in CS6.
    in CC I have noticed that any change I make is refreshed immediately in Ae, as if it rechecks the changes all the time,
    and I didn't even need to use Ctrl/Cmd+E, but it's a good habit all the same.
    I didn't come across a case where that didn't work for me, but
    if all else fails you could always change the folder location of the file, let Ae lose it, then replace the footage of the file,
    and thus let Ae re-link it.

  • LIve cache and why BW in apo

    Hello SAP masters,
    could you please help me out:
    1. Why should we maintain liveCache separately despite having it in the APO tool?
    2. Why should we have BW in APO too, in spite of having the BW tool separately?

    Hi Dallyanusha,
    1) liveCache is a special in-built component in APO which performs
    speedy operations, especially during optimisation, and retrieves
    data quickly from the database. For the effective functioning of
    planning activities, liveCache in APO is a must.
    2) BW in APO serves and stores information on business
    master data and transactional data objects, which can be
    extracted to create various queries and reports. This is
    more useful during demand planning.
    Regards
    R. Senthil Mareeswaran.
