SLF4J and Coherence?

Hi,
I saw some posts regarding Coherence and SLF4J. Does anyone know if this is supported, or likely to be in the near future?
Thanks
Martin


Similar Messages

  • Customized Cache Config and Coherence Override files not Overridden.

    Hi,
    I am doing an example on coherence and was wondering if anyone can help with a couple of questions.
    1. I am trying to override coherence-cache-config.xml and tangosol-coherence-override.xml with my custom XMLs. But when I start the cache server from the cache-server.cmd executable provided in the bin folder of Coherence, it does not pick up the overridden files. How can I override both these files?
    2. In most of the examples a new executable file is made to start the cache server, and the custom coherence-cache-config.xml is specified in it. The examples run perfectly fine, but I'm not able to see the same cache elements when I run the cache-server and cache-client present in the bin folder of Coherence. How can I integrate both of them?
    3. When we use Coherence as a web service from the server, are we supposed to run cache-server.cmd to start the cache server, or make some custom executable file for the cache server start?
    I know some of these questions may sound silly, but I'm still not able to figure out the answers.
    Any help will be appreciated.
    Cheers,
    Varun

    Hi Varun,
    Yes, you can use the system property -Dtangosol.coherence.cacheconfig=<file_name> to specify your custom cache configuration file, and -Dtangosol.coherence.override=<file_name> for the operational override file. No, creating a jar is not a bottleneck; you can use either a jar or the system properties in a production environment. It completely depends on how your environment is set up and is there for flexibility. Use whichever is handy.
    Ideally, your WS application should run in a separate JVM from your Coherence nodes. In this scenario, your WS application is set as storage-disabled and your Coherence nodes are set as storage-enabled, and both the WS application and the Coherence nodes are part of the Coherence cluster. There is no syncing between the WS application and the Coherence nodes if you are using a distributed cache; all data requests from the WS application are served by the Coherence nodes. But if you are using a near cache, then the data syncing will depend on the invalidation strategy used in your cache configuration. Read more about the near-cache scheme here: http://wiki.tangosol.com/display/COH35UG/near-scheme.
    In WLS 10.3.4, there is an integration with Coherence: you can create a Coherence cluster from the Admin Console and define the various Coherence cluster nodes; the documentation is here (http://download.oracle.com/docs/cd/E17904_01/web.1111/e16517/coh_wls.htm). But if you are using any other version of WebLogic, or any other container, then you cannot start them from WebLogic; you need to start them separately. Also, make sure that the storage-enabled Coherence nodes are started before the storage-disabled nodes.
    Hope this helps!
    Cheers,
    NJ
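    A quick way to see the precedence NJ describes is a plain-Java sketch of system-property-over-default resolution. The property name is the real Coherence one; the resolve() helper and file names here are illustrative, not part of the Coherence API:

```java
// Sketch of how a -D system property overrides a packaged default,
// in the spirit of -Dtangosol.coherence.cacheconfig=my-cache-config.xml.
// The property name is real Coherence; resolve() is a hypothetical helper.
public class ConfigResolution {
    static final String DEFAULT_CONFIG = "coherence-cache-config.xml";

    // Prefer the system property, fall back to the packaged default.
    static String resolve() {
        return System.getProperty("tangosol.coherence.cacheconfig", DEFAULT_CONFIG);
    }

    public static void main(String[] args) {
        System.setProperty("tangosol.coherence.cacheconfig", "my-cache-config.xml");
        System.out.println(resolve()); // the -D override wins over the default
    }
}
```

    In a real deployment the -D flag is passed on the JVM command line of the cache-server script rather than set programmatically.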

  • GoldenGate and Coherence

    Hi,
    There is recent requirement where customer has asked for GoldenGate integration with Oracle Coherence.
    Can anyone provide some pointers? I have browsed a lot but didn't get anything substantial.
    Some documentation or something would help. I have also gathered that the above integration is expected in 2012; till now it's not possible.
    Some of the stuff I browsed suggests that:
    1) Using DCN or AQ (along with some triggers , JMS listeners) to publish changes to coherence.
    2) Some posts suggested using GG Java API
    3) others suggested using GG JMS Adapter.
    Which one is good, or is there any other option?
    Thanks,
    Vikas

    The out-of-the-box GoldenGate integration with Coherence and TopLink to keep a Coherence cache in-sync with the database (for those times when the database is updated not through the cache) was a coordinated development effort amongst the GoldenGate, TopLink and Coherence products. This feature will work in the upcoming GoldenGate v11.2 "adapters" release, but I just found out myself that this will be shipped with Coherence, as part of the 12c release (but will work in any OGG 11.2 or later release).
    In the meantime, you've correctly enumerated your available options.

  • Transfer and coherence function with Labview.

    Please send me a Labview code with transfer function (gain vs frequency) and coherence function of 2 signals: stimulus signal (1024 pts) and response signal (1024 pts). Thanks.

    The Frequency Response Function (Mag-Phase).vi located in the Analyze>>Waveform Measurements palette returns the coherence function. The attached example VI (LabVIEW 6.1) shows how it works (using a digital filter as the unit under test).
    Note that the coherence is always equal to 1.0 (by definition) if you are not averaging.
    Attachments:
    Transfer_Function_and_Coherence.vi ‏78 KB
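    The note about non-averaged coherence can be checked numerically outside LabVIEW. The sketch below (plain Java, made-up sample values) shows that for a single record, |X·conj(Y)|² equals |X|²·|Y|², so the magnitude-squared coherence |Pxy|² / (Pxx·Pyy) is identically 1 at every frequency bin:

```java
// Numeric check of the note above: with no averaging, the magnitude-squared
// coherence is identically 1 for any pair of spectra, because
// |X * conj(Y)|^2 == |X|^2 * |Y|^2 for single complex values.
public class CoherenceIsOne {
    // Magnitude-squared coherence at one frequency bin from one record.
    // (xr, xi) and (yr, yi) are the real/imaginary parts of X and Y.
    static double msCoherence(double xr, double xi, double yr, double yi) {
        double pxx = xr * xr + xi * xi;   // |X|^2
        double pyy = yr * yr + yi * yi;   // |Y|^2
        double cr = xr * yr + xi * yi;    // Re(X * conj(Y))
        double ci = xi * yr - xr * yi;    // Im(X * conj(Y))
        double pxy2 = cr * cr + ci * ci;  // |Pxy|^2
        return pxy2 / (pxx * pyy);
    }

    public static void main(String[] args) {
        // 1.0 up to floating-point rounding, for any nonzero X and Y.
        System.out.println(msCoherence(1.5, -0.3, 0.7, 2.1));
    }
}
```

    Only after averaging several records do the cross terms stop cancelling and the coherence drop below 1.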

  • Wsrp and coherence web

    Has anyone worked on the combination of WSRP and Coherence? Is it possible? Will it work?
    Thanks
    Edited by: goodboy626696 on Jul 27, 2009 1:17 PM

    Take a look at http://coherence.oracle.com/display/COH35UG/Using+Coherence+and+WebLogic+Portal.
    :Rob:
    Coherence Team

  • Looking for some advice on CEP HA and Coherence cache

    We are looking for some advice or recommendation on CEP architecture.
    We need to build a CEP application that conforms to the following:
    • HA with no loss of events or duplicate events when failing over to the backup server.
    • We have some aggregative rules that needs to see all events.
    • Events are XMLs with size of 3KB-50KB. Not all elements are needed for the rules but they are there for other systems that come after the CEP (the customer services).
    • The XML elements that the CEP needs are in varying depth in the XML.
    Running the EPN on a single thread is not fast enough for the required throughput, mainly because of network latency to the JMS and the heavy task of parsing the XML. Because of that we are looking for a solution that will read the messages from the JMS in parallel (multi-threaded) but will keep the same order of events between the primary and secondary CEPs.
    One idea that came to our minds is to use Coherence cache in the following way:
    • On the CEP inbound use a distributed queue and not topic (at the CEP outbound it is still topic).
    • On the CEPs side use a Coherence cache that runs on the CEPs JVMs (since we already have a Coherence cluster for HA).
    • Both CEPs read from the queue using multi threading (10 reading threads – total of 20 threads) and putting it to the Coherence cache.
    • The Coherence cache is publishing the events to both CEPs on a single thread.
    The EPN looks something like this:
    JMS adapter (multi threaded) -> replicated cache on both CEPs -> event bean -> HA adapter -> channel -> processor -> ….
    Does this sound reasonable to you?
    Are we overshooting here? Is there a simpler solution for our needs?
    Is there a best practice for such requirements?
    Thanks

    Hi,
    Just to make it clear:
    We do not parse the XML on the event bean after the Coherence. We do it on the JMS adapter on multiple threads in order to utilize all the server resources (CPUs) and then we put it in the replicated cache.
    The requirements from our application are:
    - There is an aggregative query that needs to "see" all events (this means we need to pass all events through a single processor and cannot partition them across several processors).
    - Because this is an HA solution, the events on both CEPs (primary and secondary) need to be in the same order when reaching the HA inbound adapter and the processor.
    - A single-threaded JMS adapter does not read the messages from the JMS fast enough, mainly because it takes time to parse the XML into an event.
    - Using a multi-threaded adapter, or many single-threaded adapters with message selectors, creates a situation where the order of events on the two CEPs is not the same at the processor inbound.
    This is why we needed a mediator: we can read on multiple threads that parse the XMLs in parallel without worrying about message order, and on the other hand publish all the messages on a single thread to the processors on both CEPs from this shared mediator (we use a replicated cache that runs on both JVMs).
    We use a queue instead of a topic because if we read the messages from a topic on both CEPs, they would be stored twice in the Coherence replicated cache. But with a queue, when server 1 reads a message and puts it in the Coherence replicated cache, server 2 will not read it because it was removed from the queue.
    If I understand correctly, you are suggesting replacing the JMS adapter with an event bean that reads the messages from the JMS directly?
    Are you also suggesting that we not use a replicated cache, but instead a standalone cache on each server? In that case, how do we keep the same order of events on both CEPs (in both caches)?
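    For what it's worth, the "parse in parallel, deliver in order" half of the mediator idea can be sketched without Coherence or CEP at all: submit parse jobs to a pool and drain the resulting Futures in submission order on a single thread. The parse() body below is a stand-in, not real XML parsing:

```java
// Sketch: parse messages in parallel, but hand them to a single downstream
// consumer in arrival order. Draining Futures in submission order gives
// parallel work with ordered delivery.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class OrderedParallelParser {
    // Hypothetical "parse" step; here it just wraps the payload.
    static String parse(String rawXml) {
        return "event(" + rawXml + ")";
    }

    // Parse all messages on 'threads' workers; return results in input order.
    static List<String> parseOrdered(List<String> messages, int threads) {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<String>> futures = new ArrayList<>();
            for (String m : messages) {
                futures.add(pool.submit(() -> parse(m)));  // parallel parsing
            }
            List<String> ordered = new ArrayList<>();
            for (Future<String> f : futures) {             // single-threaded,
                ordered.add(f.get());                      // ordered drain
            }
            return ordered;
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(parseOrdered(List.of("a", "b", "c"), 4));
    }
}
```

    This only solves ordering within one JVM; keeping two CEP servers in step still needs the shared mediator (or HA adapter) from the thread above.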

  • Packet Loss on Xen and Coherence

    We are currently experiencing packet loss issues with Coherence 3.4.2 during the datagram test.
    Packet loss statistics are as follows:
    Rx from publisher: /10.96.67.169:9999
    elapsed: 149384ms
    packet size: 1468
    throughput: 66 MB/sec
    46889 packets/sec
    received: 7004519 of 8252377
    missing: 1247858
    success rate: 0.848788
    out of order: 0
    avg offset: 0
    gaps: 535018
    avg gap size: 2
    avg gap time: 0ms
    avg ack time: -1.0E-6ms; acks 0
    The Coherence implementation is running on a Xen VM.
    We see this happen for both Fully Virtual and Paravirtualized Guests.
    This problem does not happen on physical hardware.
    Here is the general sequence that we tried:
    1. After finding the problem on Coherence, we tried to simulate similar results on 2 HVM Xen systems and did not find the problem there.
            a. These boxes were HVM guests.
            b. Were running kernel 2.6.18-164 and redhat 5.4
            c. These guests were running on Dom-0 kernel of 2.6.18-164.2.1el5xen
    2. We had 2 paravirtualized machines on the same Dom-0 as above, but they were Red Hat 5.2, so we ran the same test there and still ran into the problem.
    3. We upgraded the paravirtualized machines to Red Hat 5.4 with the latest patch rev and the problem was still present.
    4. After this, research found that we needed to disable the ipv6 module, and that seemed to help. After disabling the IPv6 module we ran some more tests between pl1rap704-beta and pl1rap706-beta. Performance improved, but there was still packet loss.
    5. We converted 2 paravirtualized guests to HVM guests (pl1rap704-beta and pl1rap705-beta) and ran the tests; the problem was still there.
    6. Upgrade pl1rap704-beta and pl1rap705-beta to Red Hat release 5.4 and the latest kernel rev and see if the problem is still there.
    We haven't tried this on Oracle VM, but think that would be the next step to see if the problem persists there, although Oracle support indicates that Coherence is not officially supported on Oracle VM.
    We still see the packet loss issues and wonder if anyone has encountered this issue before and has a solution to it?
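    As a sanity check on the statistics above, the datagram test's "success rate" is simply received / (received + missing), using the numbers from this post:

```java
// Reproduce the datagram test's reported success rate from its own counts:
// received 7,004,519 of 8,252,377 packets, 1,247,858 missing.
public class SuccessRate {
    public static void main(String[] args) {
        long received = 7_004_519L;
        long missing  = 1_247_858L;
        double rate = (double) received / (received + missing);
        System.out.printf("%.6f%n", rate); // matches the reported 0.848788
    }
}
```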

    Just saw your post.
    If it's still a problem, send a PM to Heather_VZ; she has been very helpful to many people on many subjects.
    Tom

  • 11g and Coherence (pre-requisite?, included?)

    We are planning to work with SOA Suite 11g for a mid scale project.
    From the download's site it looks like Weblogic + Coherence "package" is a pre-requisite.
    However we see licensing issues, as we did not purchase any license for Coherence.
    Is Coherence a formal pre-requisite for SOA Suite 11g and/or OSB (service bus) 11g?
    Some people on forums suggest Coherence is "included" in SOA Suite. Would this be the case from the licensing point of view (i.e. does the license for SOA Suite / OSB give us the right to use Coherence without paying separately)?
    Last but not least, can we install SOA Suite with WebLogic only (no Coherence), even if the download page indicates we should have the WebLogic + Coherence package in place?
    Many thanks,
    CL.

    Hi
    1. I am not too familiar with the Coherence license details. As far as I know, Coherence is used as a caching framework and is part of the WebLogic Server package, and it may not have anything to do with the SOA stack. But in case SOA internally uses Coherence APIs for caching, then you may need this component installed as well.
    2. Yes, you can install WLS without Coherence. Even though your installer has WLS + Coherence, at WLS installation time choose the Custom option (instead of the default Typical). In the following screens you can uncheck the boxes for Coherence (and its samples), so you get only WLS installed. On top of this, install SOA. Create a SOA domain. Start the admin and soa servers. If they start fine without any errors referring to Coherence APIs, then you are good to go.
    Thanks
    Ravi Jegga

  • J2EE 1.4 and coherence

    I heard that the J2EE 1.4 standards require that applications not spawn threads, and that the new WebSphere will not allow it. I wanted to know what your plan is to support this.

    Nabil,
    I presume you are referring to the restrictions in the EJB specification (section 24.1.2), which are intended to accomplish several things, including ensuring portability and security. The restrictions do not apply to the containers, libraries and components that the EJB utilizes, for example the application server itself or a JDBC driver that communicates over a socket, nor do the restrictions apply to core classes of Java that you may utilize from the EJB, for example the java.lang.String class. Each of the listed rules in the specification has an associated purpose (explanation) attached to it to provide a context for the rule. For example:
    Rule: "An enterprise bean must not attempt to listen on a socket, accept connections on a socket, or
    use a socket for multicast."
    Explanation: "The EJB architecture allows an enterprise bean instance to be a network socket client, but it does not
    allow it to be a network server. Allowing the instance to become a network server would conflict with
    the basic function of the enterprise bean-- to serve the EJB clients."
    Rule: "The enterprise bean must not attempt to manage threads. The enterprise bean must not attempt
    to start, stop, suspend, or resume a thread; or to change a thread’s priority or name. The enterprise
    bean must not attempt to manage thread groups."
    Explanation: "These functions are reserved for the EJB Container. Allowing the enterprise bean to manage threads
    would decrease the Container’s ability to properly manage the runtime environment."
    Generally, what this means is that the EJB itself should not try to make assumptions about its environment (for portability) or work around limitations of the environment (security). The classes and libraries that an EJB uses as a matter of course are not held to the same restrictions. As one example, java.lang.String violates at least three of the rules set forth for EJBs (including thread synchronization, native code use, and use of static fields), yet String is a legitimate class to use within an EJB. Similarly, JDBC drivers may be implemented using native code, and may communicate over sockets; Oracle’s JDBC drivers do both, for example.
    Since Coherence is supported on virtually all Java application servers and hardware/OS platforms, applications that use Coherence are further assured of their portability. Coherence does not take advantage of any vendor-specific libraries or features, and is completely server agnostic and built in pure Java. Coherence was carefully architected to use every resource sparingly and (as much as possible) asynchronously to facilitate the maximum scalable throughput.
    Coherence elegantly fills a gap in functionality that otherwise presents a nearly unsolvable puzzle: how to safely and efficiently manage data in the application tier that may be 'expensive' to get from a persistent store, particularly for data that are used often, while ensuring that the cluster scales as well as possible.
    Regards,
    Gene

  • MapListener and Coherence and Java versions

    Hi,
    (1) On one of our testing environments, on linux, we are running coherence 3.7.1.7 on JVM Java 1.6u33 . We are using coherence*extend clients, and we do listen to updates in a cache through MapListeners. We have no issues on this configuration, and we consistently get onMapEvent calls for events.
    (2) When the same code is ported to another Linux box, running 3.7.1.7 on JVM 1.6u06, we start to get onMapEvent calls for a while, but then they stop abruptly. We are able to verify that the server cache is updated properly, through a backing map listener on the server side. However, the listeners on the coherence*extend clients are not sent notifications.
    I understand that configuration (2) is running a JVM lower than the recommended version, but when we downgraded the JVM on (1) to 1.6u06, we still got consistent onMapEvent calls.
    My question is: does this kind of inconsistent behavior ring a bell for anybody here? It is hard to understand why the notifications would be inconsistent when the JVM is lower... maybe it is not associated with the JVM at all? Can somebody throw some light on this, please?

    If it won't compile with VJ++, don't expect it to run with IE unless you've got a Sun plugin.
    As to writing parsers, what kind of grammar are you trying to parse?

  • ClassLoaders and Coherence

    Morning all,
    I've been playing with Coherence and Classloaders. I'm trying to start a Cluster in a single process of one Extend and 2 Storage nodes (basically for testing).
    I can get this to work : see CoherenceStarter code at the bottom, if I explicitly load Coherence in the URL Classloader.
    However, if I stick Coherence on the Classpath I suddenly get errors attached at the bottom - ie. Coherence is loading from the SystemClassPathLoader - not from the URL and hence they are colliding.
    So I've dug into the ClassLoader and, from my reading, I want a ParentLast (also called ChildFirst) classloader that is referred to by some of the AppServers.
    I've played with the findClass(String classname, boolean resolve) method, but I can't get this to work (trying to force it to load everything beginning with "com." from the URL).
    Can anyone solve what should be a pretty simple problem?
    This would allow everyone to fire up a real cluster in their testing (and hence test all the Serialization / Comms / KeyAssoc etc. that local-only testing misses)
    and then run all their tests against it. At the moment I have a runner that loads the Test.jar / App.jar / Coherence.jar as URLs which is ugly.
    Note that the Builder/Factory stuff doesn't do what I want.
    Here's hoping! Andrew.
    import java.net.URL;
    import java.net.URLClassLoader;

    public class CoherenceStarter {
        public static void main(String[] args) throws Exception {
            System.setProperty("tangosol.coherence.cacheconfig", "extends.xml");
            System.setProperty("tangosol.coherence.clusterport", "1111");
            System.setProperty("tangosol.coherence.cluster", "anew1");
            System.setProperty("tangosol.coherence.log.level", "9");
            System.setProperty("tangosol.coherence.log", "stdout");
            System.setProperty("tangosol.coherence.distributed.localstorage", "false");
            // Start an extend node.
            start();
            // Let the node start up.
            Thread.sleep(2000);
            System.setProperty("tangosol.coherence.distributed.localstorage", "true");
            System.setProperty("tangosol.coherence.cacheconfig", "server.xml");
            // Start a storage node.
            start();
            Thread.sleep(2000);
            // Start another storage node.
            start();
            // Wait!
            Object lock = new Object();
            synchronized (lock) { lock.wait(); }
        }

        private static void start() throws Exception {
            // Load Coherence in an isolated URLClassLoader (null parent) and
            // invoke DefaultCacheServer.startDaemon() reflectively.
            new URLClassLoader(new URL[] {new URL("file:/C:/awilson/dev/security/lib/coherence-3.5.1.p2.jar")}, null)
                    .loadClass("com.tangosol.net.DefaultCacheServer")
                    .getMethod("startDaemon", (Class<?>[]) null)
                    .invoke(null, (Object[]) null);
        }
    }
    2009-11-04 09:40:46.879/6.609 Oracle Coherence GE 3.5.1/461p2 <Error> (thread=Invocation:Management, member=3):
    java.io.IOException: Class initialization failed: java.lang.ClassCastException: com.tangosol.coherence.component.net.Member cannot be cast to com.tangosol.io.ExternalizableLite
         at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:1946)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2273)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2198)
         at com.tangosol.coherence.component.net.management.Connector$Announce.readExternal(Connector.CDB:4)
         at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:1969)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2273)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:60)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.read(InvocationService.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)
    Class: com.tangosol.coherence.component.net.Member
    ClassLoader: null
    ContextClassLoader: sun.misc.Launcher$AppClassLoader@df6ccd
         at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:1961)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2273)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2198)
         at com.tangosol.coherence.component.net.management.Connector$Announce.readExternal(Connector.CDB:4)
         at com.tangosol.util.ExternalizableHelper.readExternalizableLite(ExternalizableHelper.java:1969)
         at com.tangosol.util.ExternalizableHelper.readObjectInternal(ExternalizableHelper.java:2273)
         at com.tangosol.util.ExternalizableHelper.readObject(ExternalizableHelper.java:2219)
         at com.tangosol.io.DefaultSerializer.deserialize(DefaultSerializer.java:60)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.readObject(Service.CDB:4)
         at com.tangosol.coherence.component.net.Message.readObject(Message.CDB:1)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.grid.InvocationService$InvocationRequest.read(InvocationService.CDB:8)
         at com.tangosol.coherence.component.util.daemon.queueProcessor.service.Grid.onNotify(Grid.CDB:123)
         at com.tangosol.coherence.component.util.Daemon.run(Daemon.CDB:42)
         at java.lang.Thread.run(Thread.java:619)

    Hi Andrew,
    I don't think there is any way with the URL class loader to avoid these issues if you put the coherence.jar on the process classpath.
    You have to put the coherence.jar into each url classloader only.
    I know, communication becomes a bit problematic after this, as in this case you can't link your test harness code against Coherence. You can only use classes in the parent classloader to communicate with the child classloaders.
    A great help with this could be if you used a scripting language, e.g. Groovy to write your code which does operations on the Cluster nodes, passed the Groovy code as a String to the appropriate Cluster node (e.g. by a ConcurrentLinkedQueue which is consumed by a thread with the context class loader set to the cluster nodes classloader), and compiled that String in the cluster node. Any object you returned must also be composed of classes which reside in the parent classloader.
    Also remember to use different override files in the classpath of the child nodes and not in the parent node if you want to control things like ports, or description, or any settings which need to differ on the three nodes, as system properties are shared.
    Best regards,
    Robert
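    For reference, a minimal parent-last (child-first) classloader of the kind the original post asks about can look like the sketch below. It is a sketch under the usual delegation rules, not a drop-in for any particular app server: the child URLs are tried first, and anything not found there (e.g. java.*) falls back to the parent.

```java
// A minimal parent-last (child-first) URLClassLoader: try this loader's own
// URLs before delegating to the parent, so classes on the process classpath
// do not shadow the per-node copies.
import java.net.URL;
import java.net.URLClassLoader;

public class ParentLastClassLoader extends URLClassLoader {
    public ParentLastClassLoader(URL[] urls, ClassLoader parent) {
        super(urls, parent);
    }

    @Override
    protected Class<?> loadClass(String name, boolean resolve)
            throws ClassNotFoundException {
        synchronized (getClassLoadingLock(name)) {
            Class<?> c = findLoadedClass(name);
            if (c == null) {
                try {
                    // Child first: look in our own URLs before delegating.
                    c = findClass(name);
                } catch (ClassNotFoundException e) {
                    // Not in our URLs (e.g. java.*): fall back to the parent.
                    c = super.loadClass(name, false);
                }
            }
            if (resolve) {
                resolveClass(c);
            }
            return c;
        }
    }

    public static void main(String[] args) throws Exception {
        // With no child URLs, everything falls back to the parent.
        ClassLoader cl = new ParentLastClassLoader(new URL[0],
                ParentLastClassLoader.class.getClassLoader());
        System.out.println(cl.loadClass("java.lang.String") == String.class);
    }
}
```

    In the scenario above you would give each node's loader its own coherence.jar URL and keep coherence.jar off the process classpath, as Robert suggests.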

  • Oracle SOA Suite and coherence!!

    hi all,
    How can Coherence be used in Oracle SOA Suite? Can Coherence help in clustering two Oracle application servers?
    regards,
    karthik

    Not really.
    If you want to use Coherence, it would be around the services you call. You may access a service frequently to get some data. Coherence can be used to increase performance, availability and scalability.
    here is the doc to implement clustering.
    http://download.oracle.com/docs/cd/E10291_01/core.1013/e10294/toc.htm
    cheers
    James

  • The panic protocol and Coherence 3.5

    All,
    We just upgraded from 3.3.1 to 3.5 but I'm having trouble forming a cluster in multi-server environments. Our config files were developed against older versions of Coherence and I had a lot of trouble with them at first, some of which is detailed here: Config file problem with new Coherence 3.5
    The problem now is that we have 2 standalone nodes and 2 application nodes (WebLogic) spread across 2 physical servers (1 standalone and 1 application on each box). Previously (Coherence 3.3.1), they all formed one happy cluster of 4 members. Now (Coherence 3.5), they form separate clusters: each physical machine makes a cluster of 2 members. At startup, I can see the 2-node clusters form. Some time later (not immediately) I see the "unexpected cluster heartbeat" message warning about getting a heartbeat from the other physical server. Clearly the members on the different servers can communicate to some degree if they get these unexpected heartbeats. But why don't they form a cluster in the first place?
    If I understand the config correctly, we're using a ttl of 4, the default. I ran the multicast test and a ttl of 1 worked also. I think the join timeout is 30000.
    When the standalone node starts, it outputs a ttl of 4 and the expected cluster address and port to the log.
    One wrinkle in the config is that there are 2 applications deployed to the same weblogic jvm that both use Coherence. They are in separate classloaders and use unique cluster ports. This hasn't been a problem in the past. Now, however, my app is Coherence 3.5 and the other one is still 3.3.1. The Coherence jars are not shared and the startup params apply to both applications.
    In the past I've seen errors where 2 nodes weren't using the same coherence version, same cluster name, etc. but I don't see anything like that now.
    thanks
    john

    Hi John,
    The clustering technologies did not change between 3.3 and 3.5. The fact that you could establish a multicast-based cluster in 3.3 and not in 3.5 is therefore quite odd. My initial guess would be that your network may be blocking certain multicast address/port ranges. Are you using the same multicast address and port you'd successfully used in 3.3? Also please use this address and port when running the multicast test, to make it as close as possible to the medium on which Coherence is trying to operate.
    If none of these suggestions resolves the issue, can you please post the following:
    - multicast test output from all nodes running the test concurrently
    - coherence logs from all nodes, including startup, and panic
    - coherence operational configuration
    Regarding the mix of Coherence 3.3 and 3.5 in the same JVM: so long as they are classloader-isolated and running on a different multicast address/port, you should be fine. Note I'm suggesting that both the address and the port be different. Some OSs (Linux) have issues related to not taking the port into consideration during multicast packet delivery. It wouldn't hurt to try starting 3.5 without the 3.3 app running, just to ensure that it isn't causing your troubles in some unforeseen way.
    thanks,
    Mark
    Oracle Coherence
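    A trivial pre-flight check along the lines of Mark's advice is to confirm that the configured cluster address really is a multicast address before digging further into network filtering. The addresses below are made-up examples, not recommended values:

```java
// Quick check that a configured cluster address is in the IPv4 multicast
// range (224.0.0.0 - 239.255.255.255) before blaming the network.
import java.net.InetAddress;
import java.net.UnknownHostException;

public class MulticastCheck {
    static boolean isMulticast(String host) {
        try {
            return InetAddress.getByName(host).isMulticastAddress();
        } catch (UnknownHostException e) {
            throw new IllegalArgumentException("bad address: " + host, e);
        }
    }

    public static void main(String[] args) {
        System.out.println(isMulticast("237.0.0.1"));    // multicast range
        System.out.println(isMulticast("10.96.67.169")); // unicast address
    }
}
```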

  • Meaning and coherence between DBESTA_BWPROT[/E/H] Tables

    Hi all,
    DataSource 0UC_SALES_STATS_01 and the related extractor work with the help of three tables. I still have some problems understanding them completely. My view of them, and my open questions, are as follows:
    1.) DBESTA_BWPROT
    DBESTA_BWPROT is an index table which holds the relevant reconciliation keys for extraction. If a key is closed, the next export to BW will bring it to BW. If a key in the table isn't closed, BW will be unable to import the depending data.
    Is this right? When is the table filled with data?
    2.) DBESTA_BWPROTE
    If data couldn't be exported although the related reconciliation key is closed, it is stored in BWPROTE until the error is solved.
    Will the next delta load catch the data?
    3.) DBESTA_BWPROTH
    The table contains information about historical data. If data has been exported, it can be found in this table.
    I don't think my assumption here is correct, particularly with regard to the field BWSS_STATU in BWPROTH. But which data is stored in this table?
    Thanks a lot for your help.

    Hi,
    As far as I can see,
    DBESTA_BWPROTH holds the following information (data):
    Load request date
    Duration of the extraction
    Number of successfully loaded billing and invoicing documents
    Number of canceled extractions of invoicing documents
    The table below is the index table (which does not hold the actual data); it behaves differently based on the extraction type.
    Full Upload:
    All documents with a closed reconciliation key are extracted in accordance with the selection. Use the full upload to extract all existing documents after connection of the DataSource.
    Documents with an open reconciliation key are saved in index table DBESTA_BWPROT for later processing.
    Initial Upload:
    All closed reconciliation keys in index table DBESTA_BWPROT are processed according to the initial selection and the associated documents are extracted. You should perform the initialization after the full upload, using the same selection. The initial run only extracts data if associated reconciliation keys were closed in the meantime, between the full and initial upload, or if there are new documents.
    Delta Upload:
    All closed reconciliation keys are processed in accordance with the initial selections and the corresponding documents are extracted. This normally concerns new documents in productive operation.
    Also check the tables below, which hold actual data for the DataSource:
    EVER
    ADRC
    EVER
    EANL
    ERCH
    ERDK
    EHAU
    ERCHZ
    ERDZ
    For more Information check the below link.
    http://help.sap.com/saphelp_tm60/helpdata/en/87/c299397f3d11d5b3430050dadf08a4/frameset.htm
    Regards,
    Satya

  • TopLink 11g and coherence integration

    TopLink 11g comes with seamless integration with Coherence. Integration options are discussed at http://www.oracle.com/technology/products/ias/toplink/tl_grid.html
    Can someone provide working examples of each configuration options:
    1. Oracle TopLink Grid Coherence Read
    2. Oracle TopLink Grid: Coherence Read/Write Configuration
    3. Oracle TopLink Grid: Using Coherence as a Shared L2 Cache
    Thanks in Advance.

    On the page listed above, each of the configuration options links to a how-to showing the configuration in use.
    Doug
