Replicated Cache Data Visibility

Hello
When is the cache entry visible when doing a put(..) to a Replicated cache?
I've been reading the documentation for the Replicated Cache Topology, and it's not clear when data that has been put(..) is visible to other nodes.
For example, if I have a cluster of 26 nodes (A through Z), and I invoke replicatedCache.put("foo", "bar") from member A, at what point is the Map.Entry("foo", "bar") present and queryable on member B? Is it as soon as it has been put into the local storage on B? Or is it only just before the call to put(..) on member A returns successfully? While the put(..) from member A is "in flight", is it possible for two simultaneous reads on members F and Y to return different results because the put(..) hasn't yet been applied successfully on one of the nodes?
Regards
Pete

Hi Pete,
Data replication in a replicated cache is done asynchronously (you may refer to this post: Re: Performance of replicated cache vs. distributed cache), so you may read different results on different nodes.
Furthermore, may I ask what your use case for the replicated cache is?
Regards,
Rock
Oracle ACS

Similar Messages

  • Are put()'s to a replicated cache atomic?

    We are using Coherence for a storage application and we're observing what may be a Coherence issue with latency on put()'s to a replicated cache.
    We have two software nodes on different servers sharing information in the cache and I need to know if I can count on a put() being atomic with respect to all software and hardware nodes in the grid.
    Does anyone know if these operations are guaranteed to be atomic on replicated caches, and that an entry will be visible to ALL nodes when the put() returns?
    Thanks,
    Stacy Maydew

    You could use explicit locking, for example:
    if (cache.lock(key, timeout)) {
        try {
            Object value = cache.get(key);
            cache.put(key, value);
        } finally {
            cache.unlock(key);
        }
    } else {
        // decide what to do if you cannot obtain a lock
    }
    Note that when using explicit locking you will require multiple trips to the cache server: to lock the entry, to retrieve the value, to update it, and to unlock it, which increases latency.
    You can also use an entry processor which carries the information needed for the update.
    An entry processor eliminates the need for explicit concurrency control.
    An example:
    public class KlantNaamEntryProcessor extends AbstractProcessor implements Serializable {
        public KlantNaamEntryProcessor() {
        }

        public Object process(InvocableMap.Entry entry) {
            Klant klant = (Klant) entry.getValue();
            klant.setNaam(klant.getNaam().toLowerCase());
            entry.setValue(klant);
            return klant;
        }
    }
    and its usage:
    cache.invokeAll(filter, new KlantNaamEntryProcessor()).entrySet().iterator();
    Better examples can be found here: http://wiki.tangosol.com/display/COH33UG/Transactions,+Locks+and+Concurrency

  • Replicated scheme - data format

    Hi all
     I was doing a memory-size experiment on Coherence. I was wondering if someone has an answer to the following case:
     I am doing this investigation using the Replicated cache scheme.
     I want to load 3 million id/value pairs into a named cache (key: id, data: value).
     I can put these 3 million entries by calling put(key, data) or putAll(collection). At the end, the size of the application is around 900MB.
     Alternatively, I can put these ids into smaller hashtables (~100K items each) and put those hashtables into the named cache: put(somekey, hashtable). So, if I need to access an id, I find the respective hashmap and check the value from that hashmap. The total size of the application is ~150MB in this case.
     I know there is some redundant data that goes with each item in the named cache. For the first case, the named cache has 3 million entries; for the second case it has 30 entries (100K-item hashmaps).
     The difference is 750MB, some of which is due to ~3 million more indexes in the first case and some due to extra per-item overhead in the named cache.
     The main reason should not be the extra indexes; I would expect that to be ~100MB. Is it the case that memory usage is a lot higher since...
     3 million x EXTRA >> 30 x EXTRA
     If so, can someone tell me what this extra part consists of when we put data into a named cache using the Replicated scheme?

    There are a few things to consider:
     1) The data is stored in serialized form until it is read. The first read will deserialize the data.
         2) There is some overhead per-entry. 750MB/(3,000,000-30) = ~250 bytes per entry. Some portion of this is attributable to the per-entry overhead.
         3) How did you measure memory usage? Usually invoking <tt>System.gc()</tt> before measuring heap usage will clear garbage but is not guaranteed.
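     Regarding point 3, here is a rough way to measure heap usage (a sketch, not from the original reply; System.gc() is only a hint to the JVM, so treat the numbers as approximate):

     public class HeapUsage {
         private static long usedHeapBytes() {
             Runtime rt = Runtime.getRuntime();
             rt.gc(); // request a collection; not guaranteed to run
             return rt.totalMemory() - rt.freeMemory();
         }

         public static void main(String[] args) {
             long before = usedHeapBytes();
             // ... populate the named cache here ...
             long after = usedHeapBytes();
             System.out.println("approx. cache footprint: " + (after - before) + " bytes");
         }
     }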
         Jon Purdy
         Oracle

  • Query regarding Replicated Caches that Persist on Disk

    I need to build a simple fault tolerant system that will replicate cache
    entries across a small set of systems. I want the cache to be persistent
    even if all cluster members are brought down.
    Is this something that requires writing a custom CacheStore implementation
    or is there a configuration that I can use that uses off-the-shelf pluggable
    caches? The documentation was somewhat vague about this.
    If I need or want to write my own CacheStore, when there is a cache write-through
    operation how does Coherence figure out which member of the cluster will do
    the actual work and persist a particular object?

    Hi rhanckel,
    write-through and cache stores are not supported with replicated caches, you need to use partitioned (distributed) cache for cache stores.
    You can use a number of out-of-the-box cache stores (Hibernate, TopLink, etc.) or you can write your own if you don't find a suitable one. Configuration is the same: you need to specify the cache store class name appropriately in the <cachestore-scheme> child element of the <read-write-backing-map-scheme> element.
    You can look at the documentation for it on the following urls:
    http://wiki.tangosol.com/display/COH34UG/cachestore-scheme
    http://wiki.tangosol.com/display/COH34UG/read-write-backing-map-scheme
    http://wiki.tangosol.com/display/COH34UG/Read-Through%2C+Write-Through%2C+Write-Behind+Caching+and+Refresh-Ahead
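    As a rough illustration of where the cache store plugs in (a sketch, not a complete configuration; the class name is hypothetical):
    <distributed-scheme>
      <scheme-name>db-backed-scheme</scheme-name>
      <backing-map-scheme>
        <read-write-backing-map-scheme>
          <internal-cache-scheme>
            <local-scheme/>
          </internal-cache-scheme>
          <cachestore-scheme>
            <class-scheme>
              <!-- your CacheStore implementation -->
              <class-name>com.example.MyCacheStore</class-name>
            </class-scheme>
          </cachestore-scheme>
        </read-write-backing-map-scheme>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>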
    As for how Coherence figures out which member needs to write:
    in a partitioned cache, each cache key has an owner node, which is algorithmically determined from the key itself and the distribution of partitions among nodes (these two do not depend on the actual data in the cache). More specifically, any key is always mapped to the same partition (provided you did not change the partition-count or the partition affinity-related settings, although if you did the latter, then it is arguably not the same key anymore). Therefore Coherence just needs to know who owns a certain partition. Hence, the owner of that partition is the owner of the key, and that node is tasked with every operation related to that key.
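    If you want to see this for yourself, something like the following should work (a sketch assuming the Coherence 3.x PartitionedService API; the cache and key names are made up):
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.Member;
    import com.tangosol.net.NamedCache;
    import com.tangosol.net.PartitionedService;

    public class KeyOwnerExample {
        public static void main(String[] args) {
            NamedCache cache = CacheFactory.getCache("example");
            // The service behind a distributed (partitioned) cache can be cast:
            PartitionedService service = (PartitionedService) cache.getCacheService();
            // The member returned is the one that performs all operations
            // (including the cache store write-through) for this key.
            Member owner = service.getKeyOwner("some-key");
            System.out.println("owner member id: " + owner.getId());
        }
    }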
    Best regards,
    Robert

  • Caching Data in JSP

    I have several drop-down lists on my JSP page, and their values are retrieved from the database every time the page is requested. I find this to be inefficient, so I would like to implement a cache for these values. How can I implement a cache for these drop-down lists? And how would I know if changes were made in the database?

    There are WL caching tags for JSPs ... see the 7.0 documentation.
    For caching in a cluster and keeping things in sync among all servers, use Coherence:
    http://www.tangosol.com/coherence.jsp
    If the data in the database can change from other non-Java applications, you have to either:
    1) turn off caching
    2) cache for relatively short periods of time (auto-expiring caches)
    3) put a hook into the db to update the cache(s) in Java (hard to do)
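    For option 2, a minimal auto-expiring cache sketch (the helper names are hypothetical; this assumes the com.tangosol.net.cache.LocalCache constructor that takes a unit limit and an expiry time in milliseconds):
    import com.tangosol.net.cache.LocalCache;

    public class DropDownCache {
        // Entries expire 60 seconds after insert, so database changes
        // show up within a minute at worst.
        private static final LocalCache CACHE =
                new LocalCache(LocalCache.DEFAULT_UNITS, 60 * 1000);

        public static Object getOrLoad(String key) {
            Object value = CACHE.get(key);
            if (value == null) {
                value = loadFromDatabase(key); // hypothetical JDBC loader
                CACHE.put(key, value);
            }
            return value;
        }

        private static Object loadFromDatabase(String key) {
            // ... run the drop-down query here ...
            return null;
        }
    }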
    Peace,
    Cameron Purdy
    Tangosol, Inc.
    http://www.tangosol.com/coherence.jsp
    Tangosol Coherence: Clustered Replicated Cache for Weblogic
              "Jerson Chua" <[email protected]> wrote in message
              news:3df90864$[email protected]..
              >
              

  • Basic use of locks with replicated cache

    Hi,
    I'm in the process of evaluating Coherence and I have a few fundamental questions about best locking practices with a simple replicated cache. I've been through the docs & forums a few times, but I'm still confused.
    The docs say that a replicated cache is "fully coherent". What, exactly, does this mean? Does this imply that Coherence handles all the locking for you? Under what situations do I have to lock a node?
    Thanks.

    Hi,
    There are generally two reasons for desiring full synchronicity:
    Fault tolerance ... however, note that by the time the put returns, the data is held by both the client and the issuer. Also, we recommend the use of the Distributed cache topology for transactional data (and not Replicated). Distributed is in fact fully synchronous.
    The other reasons people usually ask about this is concern over the logical view of data updates (simultaneous state across the cluster). Even within a single JVM running on a single CPU, data updates are not simultaneous or even synchronous. This is quite intentional (in fact most of the complexity of the Java Memory Model arises from the desire to avoid the requirement for simultaneous state changes).
    In the JVM, synchronized blocks are required to avoid race conditions and inconsistent views of data, and in Coherence, locks are required to do the same.
    The key point to remember is, from the point of view of a single cluster member, data updates are in fact synchronous and simultaneous (with or without locking).
    I'm assuming your question relates to one of those two issues (as those are the most common concerns that we hear); if not, could you provide a bit more background on your requirements?
    Jon Purdy
    Tangosol, Inc.

  • Read through Scheme in Replicated Cache with Berkeley DB

    Hi, I have 20 GB of data, and when restarting the server I need to populate all of it into the Coherence cache. If I create a pre-load Java class it will take 30 minutes to 1 hour to load the data into the cache; if any request comes in while it is loading, how can I give a response to that request? I have gone through the read-through scheme and it looks good, but I don't know how to implement it with a replicated cache. Is it possible to implement read-through + replicated cache + Berkeley DB? If yes, please post sample code with full references. Thanks in advance.
    Edited by: 875786 on Dec 5, 2011 8:10 PM

    If you read the documentation for the replicated scheme configuration here: http://docs.oracle.com/cd/E24290_01/coh.371/e22837/appendix_cacheconfig.htm#BABGHEJE and specifically the part about the <backing-map> configuration, you will see:
    "To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching."
    So it would appear that you cannot do read-through with a replicated cache - which makes sense really when you think about it.
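    For reference, a minimal cache-aside sketch (the loader method is hypothetical; the point is that the application, not the backing map, loads on a miss and puts the result into the cache):
    NamedCache cache = CacheFactory.getCache("replicated-reference");
    Object value = cache.get(key);
    if (value == null) {
        value = loadFromBerkeleyDb(key); // hypothetical loader, called outside the cache service
        cache.put(key, value);
    }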
    As I already asked - why are you trying to put 20GB in a replicated cache?
    Presumably you do not have JVMs with heaps well over 20GB to hold all that data - or do you, in which case you must have tuned GC very well? You say you are concerned about NFRs and network latency, yet you are building a system that will require either very big heaps (and hence, at some point, long GC pauses) or, if you cannot hold all the data in memory, expiry configuration and then the latency of reading the data from the DB.
    If you are using read-through then presumably your use case does not require all the data to be in the cache - i.e., all your data access is a get by key and you do not do any filter queries. If this is the case then use a distributed cache, where you can store all the data or use read-through. If all your access is key-based gets then you do not need to co-locate the caches and your application in a single JVM - have separate cache server JVMs to hold the data and configure near-caches in your application.
    There are various ways to hold 20GB of data that would be much more efficient than you are suggesting with a replicated cache.
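    A rough sketch of the near-cache configuration suggested above (the scheme names are made up; the front-cache size would need tuning):
    <near-scheme>
      <scheme-name>near-reference-data</scheme-name>
      <front-scheme>
        <local-scheme>
          <high-units>10000</high-units>
        </local-scheme>
      </front-scheme>
      <back-scheme>
        <distributed-scheme>
          <scheme-name>distributed-reference-data</scheme-name>
          <autostart>true</autostart>
        </distributed-scheme>
      </back-scheme>
    </near-scheme>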
    JK

  • Replicated cache with cache store configuration

    Hi All,
    I am having two different applications. One is an Admin kind of module from where data will be inserted/updated, and the other application will read data from the Coherence cache.
    My requirement is to use a replicated cache, and the data also needs to be stored in the database. I am configuring the cache with a cache store for the DB operation.
    I have the following Coherence configuration. It works fine, and the other application is able to read updated data. But while the second application is trying to join the first application's Coherence cluster, I am getting the following exception in the cache store. If I use a distributed cache, the same cache store works fine without any issues.
    Also note that even though it is throwing the exception, the application is working as expected. One other thing: I am pre-loading data on application start-up in the first application.
    Let me know if you need any further information.
    Thanks in advance.
    coherence-cache-config.xml
    <?xml version="1.0"?>
    <!DOCTYPE cache-config SYSTEM "cache-config.dtd">
    <cache-config>
      <caching-scheme-mapping>
        <cache-mapping>
          <cache-name>TestCache</cache-name>
          <scheme-name>TestCacheDB</scheme-name>
        </cache-mapping>
      </caching-scheme-mapping>
      <caching-schemes>
        <replicated-scheme>
          <scheme-name>TestCacheDB</scheme-name>
          <service-name>ReplicatedCache</service-name>
          <backing-map-scheme>
            <local-scheme>
              <scheme-name>TestDBLocal</scheme-name>
              <cachestore-scheme>
                <class-scheme>
                  <class-name>test.TestCacheStore</class-name>
                  <init-params>
                    <init-param>
                      <param-type>java.lang.String</param-type>
                      <param-value>TEST_SUPPORT</param-value>
                    </init-param>
                  </init-params>
                </class-scheme>
              </cachestore-scheme>
            </local-scheme>
          </backing-map-scheme>
          <listener/>
          <autostart>true</autostart>
        </replicated-scheme>
        <!--
        Proxy Service scheme that allows remote clients to connect to the
        cluster over TCP/IP.
        -->
        <proxy-scheme>
          <scheme-name>proxy</scheme-name>
          <service-name>ProxyService</service-name>
          <thread-count system-property="tangosol.coherence.extend.threads">10</thread-count>
          <acceptor-config>
            <tcp-acceptor>
              <local-address>
                <address system-property="tangosol.coherence.extend.address">localhost</address>
                <port system-property="tangosol.coherence.extend.port">7001</port>
                <reusable>true</reusable>
              </local-address>
            </tcp-acceptor>
            <serializer>
              <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
              <init-params>
                <init-param>
                  <param-type>String</param-type>
                  <param-value>pof-config.xml</param-value>
                </init-param>
              </init-params>
            </serializer>
          </acceptor-config>
          <autostart system-property="tangosol.coherence.extend.enabled">false</autostart>
        </proxy-scheme>
      </caching-schemes>
    </cache-config>
    Exception:
    2010-08-31 10:46:09.062/171.859 Oracle Coherence GE 3.5.2/463 <Error> (thread=ReplicatedCache, member=2): java.lang.ClassCastException: com.tangosol.util.Binary cannot be cast to test.TestBean
        at test.TestCacheStore.store(TestCacheStore.java:137)
        at com.tangosol.net.cache.LocalCache$InternalListener.onModify(LocalCache.java:637)
        at com.tangosol.net.cache.LocalCache$InternalListener.entryInserted(LocalCache.java:599)
        at com.tangosol.util.MapEvent.dispatch(MapEvent.java:206)
        at com.tangosol.util.MapEvent.dispatch(MapEvent.java:166)
        at com.tangosol.util.MapListenerSupport.fireEvent(MapListenerSupport.java:556)
        at com.tangosol.net.cache.OldCache.dispatchEvent(OldCache.java:1916)
        at com.tangosol.net.cache.OldCache$Entry.onAdd(OldCache.java:1985)
        at com.tangosol.util.SafeHashMap.put(SafeHashMap.java:244)
        at com.tangosol.net.cache.OldCache.put(OldCache.java:266)
        at com.tangosol.coherence.component.util.CacheHandler.onLeaseUpdate(CacheHandler.CDB:42)
        at com.tangosol.coherence.component.util.CacheHandler.populateCache(CacheHandler.CDB:33)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache$CacheUpdate.onReceived(ReplicatedCache.CDB:5)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onMessage(Grid.CDB:9)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:136)
        at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.ReplicatedCache.onNotify(ReplicatedCache.CDB:3)
        at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
        at java.lang.Thread.run(Thread.java:619)
    2010-08-31 10:46:09.203/216.735 Oracle Coherence GE 3.5.2/463 <D5> (thread=DistributedCache, member=2): Deferring the distribution due to 128 pending configuration updates
    TestBean.java
    public class TestBean implements PortableObject, Serializable {
        private static final long serialVersionUID = 1L;

        private String name;
        private String number;
        private String taskType;

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public String getNumber() {
            return number;
        }

        public void setNumber(String number) {
            this.number = number;
        }

        public String getTaskType() {
            return taskType;
        }

        public void setTaskType(String taskType) {
            this.taskType = taskType;
        }

        @Override
        public void readExternal(PofReader reader) throws IOException {
            name = reader.readString(0);
            number = reader.readString(1);
            taskType = reader.readString(2);
        }

        @Override
        public void writeExternal(PofWriter writer) throws IOException {
            writer.writeString(0, name);
            writer.writeString(1, number);
            writer.writeString(2, taskType);
        }
    }
    TestCacheStore.java
    public class TestCacheStore extends Base implements CacheStore {
        @Override
        public void store(Object oKey, Object oValue) {
            if (logger.isInfoEnabled())
                logger.info("store :" + oKey);
            TestBean testBean = (TestBean) oValue; // Giving ClassCastException here
            // Doing some processing here over testBean
            ConnectionFactory connectionFactory = ConnectionFactory.getInstance();
            // Get the connection
            Connection con = connectionFactory.getConnection();
            if (con != null) {
                // Code to insert into the database
            } else {
                logger.error("Connection is NULL");
            }
        }
    }
    Edited by: user8279242 on Aug 30, 2010 11:44 PM

    Hello,
    The problem is that replicated caches are not supported with read-write backing maps.
    Please refer to the link below for more information.
    http://download.oracle.com/docs/cd/E15357_01/coh.360/e15723/cache_rtwtwbra.htm#CFHEJHCI
    Best regards,
    -Dave

  • Using CacheLoader for replicated cache in Coherence 3.6

    Hi,
    Is it possible to configure a CacheLoader for a replicated cache in Coherence 3.6? The backing map will be a local scheme for this cache.
    Regards,
    CodeSlave

    We have a "start of day" process that just runs up a Java client (full cluster member, but storage disabled node) that clears and then repopulates a number of "reference data" replicated caches we use in our application. Use the hints-n-tips in the Coherence Developer's Guide (bulk operations, etc.) to get decent performance. We load the data from an Oracle database. Again, tune the extract side (JDBC/JPA batching, etc.) to get that side of things performing well.
    For ad-hoc, intra-day updates to the replicated caches (and you should look to minimise these), we use a "listener" that attaches to an Oracle database DCN (data change notification) stream.
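    As a rough illustration of the bulk-load approach described above (a sketch with made-up names; the batch size of 1,000 is just an example, and fetchRowsFromDatabase() stands in for the JDBC/JPA extract):
    NamedCache cache = CacheFactory.getCache("reference-data");
    Map<Object, Object> batch = new HashMap<Object, Object>();
    for (Object[] row : fetchRowsFromDatabase()) { // hypothetical extract
        batch.put(row[0], row[1]);
        if (batch.size() == 1000) { // bulk operation: one putAll per 1000 rows
            cache.putAll(batch);
            batch.clear();
        }
    }
    if (!batch.isEmpty()) {
        cache.putAll(batch); // flush the remainder
    }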
    Cheers,
    Steve

  • Replacing our Replicated Caches with Distributed+CQC

    Hi,
    I've been advised on this forum to replace our Replicated caches with Distributed+CQC with an AlwaysFilter. This should give me the "zero-latency" get() performance which really should be part of a Replicated Cache, but apparently isn't (the lease-model and serialization get in the way of this).
    My concern is now storage efficiency - my understanding is that my storage footprint will double, as the same information is now stored in two places (partitioned backing map and cqc front map). Is this correct? If so, I'm unsure why it would be considered a good replacement for the Replicated scheme.
    Thanks,
    Matt

    The second link looks like it helped you out
    - Re: The effects of "leases" on the read-performance of Replicated Caches
    Also, if you are looking at (near) zero latency when garbage collections occur, you might want to have a look at the Zing JVM
    - http://www.azulsystems.com/products/zing/virtual-machine
    Latency due to garbage collection (i.e., introduced pause times) could break your (extreme) low-latency demand.
    You could try the CQC with the always filter:
    NamedCache cache = CacheFactory.getCache("somecache");
    ContinuousQueryCache localCache = new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE);
    The preceding code will result in a locally materialized view of the cache data that satisfies the specified filter. By default, both keys and values will be cached locally.
    If you want to cache only keys and retrieve values from the back cache as needed, which might be the best option if the values are large and accessed infrequently, or
    if you only care about having an up-to-date keyset locally, you can pass false as the third argument to the CQC constructor.
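    For the keys-only case that would look something like this (assuming the three-argument constructor):
    ContinuousQueryCache keysOnly =
            new ContinuousQueryCache(cache, AlwaysFilter.INSTANCE, false); // false = do not cache values locally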
    To get data from the CQC you can use
    Iterator<Map.Entry<Integer, Klant>> data = localCache.entrySet().iterator();

  • One concern for replicated cache

    For replicated caches, I think changes are replicated asynchronously. How then should I understand that update operations suffer "bad" performance when many nodes exist? Could you kindly explain the costs involved? Thanks!
    BR
    michael

    user8024986 wrote:
    Hi Robert,
    That sounds reasonable: unicast and multicast messages are sent out without having to wait for responses, and multicast reduces the amount of network traffic at the same time.
    Then my concern is still that, for a replicated cache, any changes are sent as messages to other nodes asynchronously (without waiting for the response, whether unicast or multicast), so the cost is mainly that of sending the changes, which depends on network conditions. If the network is fast enough, and since the messages are sent asynchronously, performance will only be affected to a limited degree, right?
    thanks,
    Michael
    Michael,
    it may not have been clear, but what Aleks said is still true. The interleaving means that messages are sent out to recipient nodes interleaved, without waiting for a response before sending to the next node, but the cache.put() call returns only after the positive response has arrived from all cache nodes confirming that the update was incorporated into their own copy of the data (or the death of the recipient node was detected, in which case the response will not be waited for).
    So the overall cost on the network is both sends and responses, and since in general responses go to a single node (the sender of the message the response replies to) therefore even for a multicast update there will be several unicast responses.
    But yes, the higher the number of cluster nodes, the larger the load this puts on the network.
    There are several measures in Coherence for trying to decrease this effect on the network, e.g. bundling together messages or ACKs to the same destination which allows them to be sent in less packets than if they were sent alone (this is particularly effective in case of small messages and ACKs), but this is really effective when there are many threads on each node each doing cache operations as this increases the likelihood of multiple messages/ACKs to be sent to the same node roughly at the same time.
    But in general, if you have frequent writes to a replicated cache you can't really scale it after a point (a certain number of cluster nodes) due to saturating the network, and you should consider switching to partitioned caches (distributed cache). Even near and continuous query caches are not really effective in case of write-heavy caches (more writes than reads).
    Even if the network is able to keep up, more messages would still increase the length of the queues of messages in a node to respond to and therefore more messages would probably mean longer response times.
    Best regards,
    Robert

  • Single Sign-On and Data Visibility Rights

    Hello,
    I was wondering whether anyone has any best practices for implementing single sign-on and user identification with Xcelsius.
    More specifically, I need to interrogate user role, and limit certain data visibility based on that role.
    For example, a sales rep may only see certain data for their own territories, but the regional and national managers can see more.
    With the emphasis on improving enterprise integration in the new version coming up, I'm also wondering if there are any improvements included for this aspect.
    Thanks in advance.
    Derick

    Hi Derick,
    I want to split our discussion into 2 parts:
    1) Sign-on
    2) Viewing data based on the hierarchy
    1) Before discussing the sign-on, I want to know which connectivity you are using: Live Office or QaaWS?
    2) We can make the second point possible in two ways: one is by providing restrictions at the universe level, and the other is through the use of flash variables.
    Using flash variables:
    The main idea of using flash variables is reading the user ID from BO authentication and, based on that, fetching the hierarchy level of that user. Then we use some Excel logic to hide the data from lower hierarchy levels (here we use Dynamic Visibility for components).
    I hope this is what you are looking for.
    If so, I have more points for achieving such a scenario.
    Please provide your BO environment details, so that it will be easy to identify the best way to achieve it.
    Regards,
    AnjaniKumar C.A.

  • Some music files do not show up in Google Play Music app library. I did clear cache/data and restarted phone. The music is stored on the SD card. Most of the music in the library is in the same folder on the SD card. I can play the song from file manager

    Some music files do not show up in the Google Play Music app library. I did clear cache/data and restarted the phone. The music is stored on the SD card. Most of the music in the library is in the same folder on the SD card. I can play the song from the file manager, but it still is not in the music library in Play Music.

    Cyndi6858, help is here! We'd be happy to help figure this out. Just to be sure though, the Droid Maxx should not have an SD card. Is this the Droid Razr Maxx? How did you add the music to the device? Are you able to see the files and folders located on the SD card or device when plugged in?
    Thanks,
    MichelleH_VZW
    Follow us on Twitter @VZWSupport

  • Report Using A Stored Procedure Is Caching Data

    Post Author: springerc69
    CA Forum: Data Connectivity and SQL
    I converted a report from a view that worked fine to a stored procedure to try to improve the performance of the report, but when I publish the report it seems to cache the data. When you change the parameters you use to call the report, or simply run the report again with the original parameters, the report doesn't run the sproc and just shows the cached data from the original request. If I right-click on the report and select refresh (web-based Crystal report), it prompts for the parameters. If I just close out the prompt window and the report window, then click on the link for the report again, it returns the correct results based on the new parameters, or a refresh based on the original parameters. I've checked the cache time setting and set it to 0, and if you close the Internet Explorer window that originally called the report, then open IE back up and request the report, it will return the appropriate data. I have also verified that the report is not set up to save data with the report. This is on Crystal XI Server.

    Post Author: synapsevampire
    CA Forum: Data Connectivity and SQL
    Which viewer are you using?
    It might be that your IE settings are caching the report pages, because you're using an HTML viewer.
    Try the Active-X viewer.
    I've forgotten which icon it is that changes the viewer...it's under the preferences options, I think it's the one that looks like a hunk of cheese on the right upper side.
    -k

  • Data visibility problem in PDF file generated from R/3 system

    Hi all,
    We are converting the output of a Smartform into PDF format. From this Smartform we receive data in OTF format, after which the required data is converted from OTF format to PDF format. Then the PDF data is sent through mail using FM SO_NEW_DOCUMENT_ATT_SEND_API1 as an attachment.
    For the above mentioned Smartform, we have defined the Paragraph & Character font type as 'TIMES' in the smart style. But when we receive the PDF file (through mail), some data appears in other fonts:
    1) When the PDF file is generated from the Development system, the font types used in the PDF are Arial and Times-Roman. In this case, we are able to see the PDF output and faced no major problems because both of these fonts are supported by the PDF file format.
    2) When the PDF file is generated from the Testing system, the font type used in the PDF is Arial. In this case, we are able to see the PDF output because this font is supported by the PDF file format.
    3) When the PDF file is generated from the Production system, the font types used in the PDF are Arial, Times-Roman and PBRS. Here, we are able to see the data displayed in the Arial and Times-Roman fonts, but we are not able to see the data displayed in the PBRS font because this font is not supported by the PDF file format. If we copy the distorted text from the PDF file and paste it into MS Word, the required data is clearly visible, because MS Word supports the PBRS font.
    But as per the smart style we have used only the 'TIMES' font, so why are different fonts appearing in the PDF file?
    What settings are required to be done to make this data visible?
    Any pointer to solve this issue will be really appreciable. Thanks in advance for your help.
    Thanks and Regards,
    Dheeraj Tolani

    Hi,
    check the following
    http://help.sap.com/bp_biv235/BI_EN/html/bw.htm
    business content
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/com.sap.km.cm.docs/library/biw/g-i/how%20to%20co-pa%20extraction%203.0x
    https://websmp203.sap-ag.de/co
    http://help.sap.com/saphelp_nw04/helpdata/en/37/5fb13cd0500255e10000000a114084/frameset.htm
    (navigate with expand left nodes)
    also co-pa
    http://help.sap.com/saphelp_nw04/helpdata/en/53/c1143c26b8bc00e10000000a114084/frameset.htm
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/fb07ab90-0201-0010-c489-d527d39cc0c6
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/1910ab90-0201-0010-eea3-c4ac84080806
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/ff61152b-0301-0010-849f-839fec3771f3
    FI-CO 'Data Extraction -Line Item Level-FI-CO'
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/a7f2f294-0501-0010-11bb-80e0d67c3e4a
    FI-GL
    http://help.sap.com/saphelp_nw04/helpdata/en/c9/fe943b2bcbd11ee10000000a114084/frameset.htm
    http://help.sap.com/saphelp_470/helpdata/en/e1/8e51341a06084de10000009b38f83b/frameset.htm
    http://www.sapgenie.com/sapfunc/fi.htm
    Please reward for the same.
