Doubt about durable subscribers

I read this in the WebLogic docs:
          "For durable subscriptions, WebLogic JMS stores a message in a persistent
          file or database until the message has been delivered to the subscribers
          or has expired, even if those subscribers are not active at the time
          that the message is delivered. A subscriber is considered active if the
          Java object that represents it exists. Durable subscriptions are
          supported for Pub/Sub messaging only."
          Now how does the WebLogic server know how many subscribers there are? To
          describe it more accurately:
          1. One WebLogic server, say version 7.0. One topic has been configured,
          along with the other required stuff.
          2. Suppose 3 subscribers connect to the server.
          3. A message is sent to the topic.
          4. Now if I kill one of the subscribers, is WebLogic going to wait
          to deliver to the killed subscriber before marking the message as
          delivered?
          Shiva.
          

WebLogic does not do anything different from any other
          JMS vendor. The behavior of durable subscriptions is
          explicitly defined in the JMS standard.
          The durable subscriber client explicitly creates a "subscription"
          when it first comes up. The JMS server stores a record of this
          subscription both in memory and on disk. The subscription
          does not go away until the client explicitly calls
          "unsubscribe" to delete it. The subscription,
          as per the JMS spec, is uniquely identified by a combination
          of: the topic specified by the client, the client ID set on
          the connection, and the subscription name supplied by the
          subscriber. The subscription is not identified in any
          way by "JVM id". All messages sent after the subscription
          is created accumulate, even if the client is disconnected.
          The client reconnects by subscribing again with the
          same identifiers it used previously.
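          For example, here is a minimal sketch of what a durable subscriber client
          might look like (the JNDI names "MyConnectionFactory" and "MyTopic", the
          client ID and the subscription name are all made up for illustration):
          import javax.jms.*;
          import javax.naming.Context;
          import javax.naming.InitialContext;
          public class DurableSubscriberExample {
              public static void main(String[] args) throws Exception {
                  Context ctx = new InitialContext(); // assumes JNDI properties point at the server
                  TopicConnectionFactory tcf =
                      (TopicConnectionFactory) ctx.lookup("MyConnectionFactory"); // hypothetical name
                  Topic topic = (Topic) ctx.lookup("MyTopic");                    // hypothetical name
                  TopicConnection con = tcf.createTopicConnection();
                  // The client ID plus the subscription name below identify the durable subscription.
                  con.setClientID("inventory-client");
                  TopicSession session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
                  // The first run creates the subscription; later runs with the same client ID and
                  // name reconnect to it, and any messages published in between are delivered.
                  TopicSubscriber sub = session.createDurableSubscriber(topic, "inventory-sub");
                  con.start();
                  Message m = sub.receive(5000); // wait up to 5 seconds for a message
                  System.out.println("received: " + m);
                  // close() only detaches the consumer; the subscription itself survives.
                  sub.close();
                  // session.unsubscribe("inventory-sub"); // uncomment to delete the subscription
                  con.close();
              }
          }
          Note that unsubscribe() is only legal once the consumer on that subscription is closed.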
          Tom
          Shiva P wrote:
          > Guess my question is not clear. Here it is again.
          >
          > How does weblogic know that it has to wait for the durable subscriber? I mean
          > is it the JVM id of the subscriber or some other unique thing that weblogic
          > uses to know about all the durable subscribers?
          >
          > shiva.
          >
          > Tom Barnes wrote:
          >
          >
          >>Shiva P wrote:
          >>
          >>>I read this in the weblogic docs
          >>>For durable subscriptions, WebLogic JMS stores a message in a persistent
          >>>file or database until the message has been delivered to the subscribers
          >>>or has expired, even if those subscribers are not active at the time
          >>>that the message is delivered. A subscriber is considered active if the
          >>>Java object that represents it exists. Durable subscriptions are
          >>>supported for Pub/Sub messaging only.
          >>>
          >>>Now how does the weblogic server know how many subscribers are there? To
          >>>describe more accurately
          >>>
          >>>1. One weblogic server, say version 7.0. One topic has been configured
          >>>along with
          >>>the other required stuff.
          >>>2. Suppose 3 subcribers connect to the server
          >>>3. A message is sent to the topic.
          >>>4. Now if I kill one of the subscribers then is weblogic going to wait
          >>>to deliver to the killed subscriber before marking the message as
          >>>delivered?
          >>
          >>Yes, if the subscription is durable.
          >>
          >>Durable subscriptions continue to exist even if the consumer
          >>is killed (or closed). Messages continue to accumulate in
          >>them. This is the whole purpose of durable subscriptions. If
          >>you want the messages to go away when the consumer goes away,
          >>do not use a durable subscription.
          >>
          >>Tom
          >>
          >>
          >>>Shiva.
          >>>
          >>
          >
          

Similar Messages

  • Weblogic admin - deleting multiple durable subscribers?

    I may be in the wrong forum, but there is a problem in our code that has resulted in over 700 inactive durable subscribers. From the weblogic console gui, I can only delete these one by one. Is there a way to delete multiple subscribers (perhaps from the command line) without deleting the JMSTopic. One by one will take quite a long time.
              thanks in advance - Dan

    If the store has no data that you care about, you could shut down WebLogic and delete the store's file(s) (or database table(s) if it's a JDBC store), then restart WebLogic.
              The durable subscriber delete is available on the runtime MBeans, so another option is to write a WLST script or a Java program to call the deletes. I don't personally have any sample code for this.
              Tom
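    As a rough illustration of the "Java program" option using plain JMX, something like the sketch below could work. The MBean object-name pattern, the "Active" attribute and the "destroy" operation name are assumptions; check the WebLogic runtime MBean reference for your release for the exact names, and the host/port/credentials are placeholders:
    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;
    import java.util.Hashtable;
    import java.util.Set;
    public class DeleteInactiveDurableSubscribers {
        public static void main(String[] args) throws Exception {
            // Connect to the server's runtime MBean server over t3
            // (needs wljmxclient.jar on the classpath; host/port/credentials are placeholders).
            JMXServiceURL url = new JMXServiceURL("t3", "localhost", 7001,
                    "/jndi/weblogic.management.mbeanservers.runtime");
            Hashtable<String, String> env = new Hashtable<>();
            env.put(javax.naming.Context.SECURITY_PRINCIPAL, "weblogic");
            env.put(javax.naming.Context.SECURITY_CREDENTIALS, "password");
            env.put("jmx.remote.protocol.provider.pkgs", "weblogic.management.remote");
            JMXConnector connector = JMXConnectorFactory.connect(url, env);
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // ASSUMPTION: the object-name pattern, the "Active" attribute and the "destroy"
            // operation are guesses; look up the exact names in the runtime MBean reference.
            Set<ObjectName> subs = mbs.queryNames(
                    new ObjectName("com.bea:Type=JMSDurableSubscriberRuntime,*"), null);
            for (ObjectName sub : subs) {
                Boolean active = (Boolean) mbs.getAttribute(sub, "Active");
                if (!active) {
                    mbs.invoke(sub, "destroy", new Object[0], new String[0]);
                    System.out.println("deleted " + sub);
                }
            }
            connector.close();
        }
    }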

  • Removing durable subscribers

    Is it possible to "purge" durable subscriptions established by JMS clients? Any APIs available for this?
    Here is the scenario. My application has a unique ID. The application establishes durable subscribers to a few topics. The unique ID is used in the setClientID() of the topic connection. Based on the messages it receives, the application would create more durable subscribers. All the subscribers are
    created off of the same topic connection object.
    When I "uninstall" the application, I want to "unsubscribe" or "remove" all the subscriptions established by the application. All I know about the application is the unique ID, and I do not know the subscriber ID of the individual durable subscribers.
    So, my question is, is it possible to "remove" all the subscriptions created by the application given the client ID used in the application?
    TopicSession.unsubscribe() won't work for me because I do not know the subscriber ID.
    Is there a "Management" API available to do this?

    There is no management API to do what you described.
    The MQ administration utilities do have support for purging
    and destroying durable subscribers though. See the
    'Managing Durable Subscriptions' section of chapter 6 in
    the MQ Administration Guide.
    imqcmd list dur -d topicName
    imqcmd purge dur -n durableName -c clientID
    imqcmd destroy dur -n durableName -c clientID
    hope this helps,
    -isa
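    The JMS API itself also has no way to enumerate another client's subscriptions, but if you can change the application, one workaround going forward is to derive every subscription name from the application's unique ID and then walk those names at uninstall time and call unsubscribe(). A minimal sketch under that assumption (the "appId-n" naming convention is invented for illustration):
    import javax.jms.JMSException;
    import javax.jms.Session;
    import javax.jms.TopicConnection;
    import javax.jms.TopicSession;
    public class Unsubscriber {
        // con must be a freshly created connection: setClientID() is only legal before the
        // connection is otherwise used, and unsubscribe() only works while no consumer is open.
        public static void removeAll(TopicConnection con, String appId, int maxSubs) throws JMSException {
            con.setClientID(appId); // the same client ID the application used
            TopicSession session = con.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            for (int i = 0; i < maxSubs; i++) {
                try {
                    // deletes the durable subscription named "<appId>-<i>" if it exists
                    session.unsubscribe(appId + "-" + i);
                } catch (JMSException ignored) {
                    // no such subscription; skip it
                }
            }
            session.close();
        }
    }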

  • Doubt about a null value assigned to a String variable

    Hi,
    I have a doubt about the behavior when assigning a null value to a String variable and then printing it. The code is the following:
    public static void main(String[] args) {
            String total = null;
            System.out.println(total);
            total = total + "one";
            System.out.println(total);
    }
    The doubt comes when I see the output, which is this:
    null
    nullone
    A variable with a null value means it does not contain a reference to an object in memory, so the question is why "null" is printed when I concatenate the total variable (which has a null value) with the string "one".
    Is the null value converted to a string?
    Please clarify.
    Regards and thanks!
    Carlos

    > null is a keyword to inform compiler that the reference contain nothing
    No. 'null' is not a keyword, it is a literal. Beyond that the compiler doesn't care. It has a runtime value as well.
    > total contains null value means it does not have memory
    No, it means it refers to nothing, as opposed to referring to an object.
    > for representation purpose it contain "null"
    No. println(String) has special behaviour if the argument is null. This is documented and has already been described above. Your handwaving about 'for representation purpose' is meaningless. The compiler and the JVM don't know the purpose of the code.
    > e.g. this keyword shows a hash value instead of memory address
    No it doesn't: it depends entirely on the actual class of the object referred to by 'this', and specifically what its toString() method does.
    > similarly "total" maps null as a literal.
    Completely meaningless. "total" doesn't 'map' anything, it is just a literal. The behaviour you describe is a property of the string concatenation operator, not of string literals.
    > I hope you can understand this.
    Nobody could understand it. It is complete nonsense. The correct answer has already been given. Please read the thread before you contribute.
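    For what it's worth, here is a tiny self-contained example of the behaviour being discussed: string concatenation converts a null reference to the four characters "null" (the same conversion String.valueOf(Object) performs), and println(String) prints "null" rather than throwing:
    public class NullConcat {
        public static void main(String[] args) {
            String total = null;
            // println(String) prints "null" for a null argument instead of throwing.
            System.out.println(total);                          // prints: null
            // The + operator converts the null reference to "null" before appending,
            // which is why no NullPointerException occurs here.
            total = total + "one";
            System.out.println(total);                          // prints: nullone
            // The cast is needed so the call resolves to valueOf(Object), not valueOf(char[]).
            System.out.println(String.valueOf((Object) null));  // prints: null
        }
    }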

  • Query regarding durable subscribers in JMS Adapter (11g)

    Hi, I am facing some issues with using durable subscribers.
    I am doing a POC in which I have two consumers (consumer 1 and consumer 2) listening to a topic.
    On the server console, the monitor tab of the JMS topic shows number of consumers as 2 which is good.
    Also, when any message is put on the topic , BPEL processes for both consumers get triggered as expected.
    Now, I add durable subscriber ID as 123 to one of the consumers (consumer 1)
    After deploying the code, the count for the number of current consumers listening becomes 1.
    So I create a durable subscriber on the console with id as 123 after which the consumer count is back to 2.
    The issue however is that, now when I put a message on topic , only consumer 2 is triggered.
    The message for consumer 1 can be seen on the durable subscriber 123's page.
    My question is: since the consumer is listening to the topic, why is the message going to the subscriber instead of creating an instance?
    I also tried following :
    1. created a new outbound connection pool in the resource adapter
    2. setting property FactoryProperties to value "ClientID=123"
    3. pointing the adapter connection factory to this new location
    but this is also not helping.
    Could anyone please help me out in this ?
    Thanks and Regards,
    Ketan

    Did you have any luck? Can you tell me the steps?

  • Doubt about Select statement.

    Hi folks!!
                 I have a few doubts about SELECT statements; they may be silly things, but it's useful for me.
    What is the difference between the statements below?
    1) SELECT * FROM TABLE.
    2) SELECT SINGLE * FROM TABLE.
    3) SELECT SINGLE FROM TABLE.
    Hope I will get an answer, thanks in advance.
    Regards
    Richie..

    Hi,
    Try this, and if possible use SAP help. I mean, place the cursor on SELECT and press F1.
                 Types of SELECT statements:
    1.     select * from ztxlfa1 into table it.
                 This is a simple SELECT statement to fetch all the data of a DB table into the internal table it.
       2.   select * from ztxlfa1 into table it where lifnr between 'V2' and 'V5'.
            This is using a WHERE condition between 'V2' and 'V5'.
      4. select * from ztxlfa1 where land1 = 'DE'. "row goes into default table work area
      5. select lifnr land1 from ztxlfa1
            into corresponding fields of it   "notice 'table' is omitted
             where land1 = 'DE'.
              append it.
               endselect.
         Now the data will go into the work area, and then you add it to the internal table with the
            APPEND statement.
      6.   Table 13.2 contains a list of the various forms of select as it is used with internal tables and their relative efficiency. They are in descending order of most-to-least efficient.
    Table 13.2  Various Forms of SELECT when Filling an Internal Table
    Statement(s)                                   Writes To
    select into table it                                    Body
    select into corresponding fields of table it   Body
    select into it                                    Header line
    select into corresponding fields of it           Header line
    7. SELECT VBRK~VBELN
           VBRK~VKORG
           VBRK~FKDAT
           VBRK~NETWR
           VBRK~WAERK
           TVKOT~VTEXT
           T001~BUKRS
           T001~BUTXT
        INTO CORRESPONDING FIELDS OF TABLE IT_FINAL
        FROM VBRK
        INNER JOIN TVKOT ON VBRK~VKORG = TVKOT~VKORG
        INNER JOIN T001 ON VBRK~BUKRS = T001~BUKRS
        WHERE VBELN IN DOCNUM AND VBRK~FKSTO = ''
       AND VBRK~FKDAT in date.
    A SELECT statement using inner joins on VBRK, TVKOT and T001, in this case based on the conditions.
    8. SELECT T001W~NAME1 INTO TABLE IT1_T001W
    FROM T001W INNER JOIN EKPO ON T001W~WERKS = EKPO~WERKS
    WHERE EKPO~EBELN = PURORD.
    Here we are selecting a single field into table it1_t001w, with an inner join on EKPO.
    9. SELECT BUKRS LIFNR EBELN FROM EKKO INTO CORRESPONDING FIELDS OF IT_EKKO WHERE     EBELN IN P_O_NO.
    ENDSELECT.
    SELECT BUTXT   FROM T001 INTO  IT_T001 FOR ALL ENTRIES IN IT_EKKO WHERE BUKRS = IT_EKKO-BUKRS.
    ENDSELECT.
    APPEND IT_T001.
    Here I am using the FOR ALL ENTRIES addition with the SELECT statement. Both joins and FOR ALL ENTRIES can be used to fetch data on a condition, but FOR ALL ENTRIES is the best one here.
    10. SELECT A~VBELN B~VTEXT A~FKDAT C~BUTXT A~NETWR A~WAERK INTO TABLE ITAB
                 FROM  VBRK AS A
                 INNER JOIN TVKOT AS B ON
                 A~VKORG EQ B~VKORG
                 INNER JOIN T001 AS C ON
                 A~BUKRS EQ C~BUKRS
                 WHERE  A~VBELN IN BDOCU AND A~FKSTO EQ ' ' AND B~SPRAS EQ
                 SY-LANGU
                 AND A~FKDAT IN BDATE AND A~VBELN EQ ANY ( SELECT VBELN FROM
                VBRP WHERE VBRP~MATNR EQ ITEMS ).
        Here we are using a subquery, specified in brackets, in the WHERE clause of the join.
    Thanks,
    chandu.

  • Doubt about Bulk Collect with LIMIT

    Hi
    I have a doubt about BULK COLLECT: when is the COMMIT done?
    I Get a example in PSOUG
    http://psoug.org/reference/array_processing.html
    CREATE TABLE servers2 AS
    SELECT *
    FROM servers
    WHERE 1=2;
    DECLARE
    CURSOR s_cur IS
    SELECT *
    FROM servers;
    TYPE fetch_array IS TABLE OF s_cur%ROWTYPE;
    s_array fetch_array;
    DECLARE
    BEGIN
      OPEN s_cur;
      LOOP
        -- fetch up to 1000 rows per round trip
        FETCH s_cur BULK COLLECT INTO s_array LIMIT 1000;
        FORALL i IN 1..s_array.COUNT
        INSERT INTO servers2 VALUES s_array(i);
        EXIT WHEN s_cur%NOTFOUND;
      END LOOP;
      CLOSE s_cur;
      -- single commit, issued only after the loop has inserted all batches
      COMMIT;
    END;
    If my table Servers has 3,000,000 records, when is the commit done? When all records have been inserted?
    Could it crash the redo log?
    using 9.2.08

    muttleychess wrote:
    > If my table Servers has 3,000,000 records, when is the commit done?
    The commit point has nothing to do with how many rows you process. It is purely business driven. Your code implements some business transaction, right? So if you commit before the whole transaction (from a business standpoint) is complete, other sessions will already see changes that are (from a business standpoint) incomplete. Also, what if the rest of the transaction (from a business standpoint) fails?
    SY.

  • Doubts about BP number in SRM and SUS

    Hello everyone,
    I have some doubts about the BP number, especially for Vendors.
    I am working with the implementation of SRM 5.0 with SUS in an extended classic scenario. We will use one server for SRM and other for SUS. We will use the self registration for vendor (in SUS). My questions are:
    - Can I have the same BP number in SRM and SUS?? Or is it going to be different??
    - When a vendor accesses at the site to make a self registration in SUS, the information is sent to SRM as prospect (by XI) and there the prospect is changed as vendor? After that, is it necessary to send something from SRM to SUS again? (to change the prospect to vendor)
    - When is it necessary to replicate vendors from SRM to SUS??
    Thanks
    Ivá

    Dear Ivan,
    Here is answer to all your questions. Follow these steps for ROS configuration:
    Pls note:
    1. No need to have separate clients for ROS and SUS. Create two clients for EBP and (SUS+ROS).
    2. No need for XI to transfer newly registered vendors from ROS to EBP
    Steps to configure scenario:
    1. Make entries in SPRO --> "Define backend system" on both clients.
        You will have to specify the logical systems of both clients (ROS as well as EBP)
    2. Create RFCs on both clients to communicate with each other
    3. In ROS client create Service User for supplier registration service with roles:
        SAP_EC_BBP_CREATEUSER
        SAP_EC_BBP_CREATEVENDOR
        Grant the "S_A.SCON" profile to the user.
    4. Maintain the service user in the "Logon Data" tab of the service ros_self_reg in the ROS client
    5. Create Purchasing and vendor Organizational Structure in EBP client and maintain necessary
        attributes. create vendor org structure in ROS client
    6. Create your ROS registration questionnaires and assign to product categories- in ROS client
    7. To transfer suppliers from registration system to EBP/Bidding system, Supplier pre-screening has to be
        defined as supplier directory in SRM server - EBP client.
        Maintain your prescreen catalog in IMG --> Supplier Relationship Management --> SRM Server -->
        Master Data --> Define External Web Services (Catalogs, Vendor Lists etc.)
    8. Maintain this catalog ID in the purchasing org structure under the attribute "CAT" - in EBP client
    9. Modify purchaser role in EBP client:
        Open the node for "ROS_PRESCREEN" and maintain the parameter "sap-client" and the ROS client number
    10. Maintain organizational data in Make Settings for the Business Partners
    Supplier Relationship Management -> Supplier Self-Services -> Master Data -> Make Settings for the Business Partners. This information is actually stored in table BBP_MARKETP_INFO.
    11. Using the Manage Business Partner node with the purchaser's login (BBPMAININT), newly registered vendors are pulled from the pre-screen catalog and the BP is created in the EBP client. If you have a SUS scenario, ensure you maintain the "portal vendor" role here.
    I hope this clarifies all your doubts.
    Pls reward points for helpful answers
    Regards,
    Prashant

  • Doubt about uses of OBIEE

    I have some doubts about the possible uses of OBIEE. It happens that using OBIEE sometimes users demand report of an "analytical" type, that is aggregated analysis through OBIEE’s Answers, selecting data from dimension tables and measures from fact tables. That’s the ordinary purpose of business intelligence tools!!!
    Some other times though, users demand to perform through Answers analyses of an "operating" type, that is simple extractions of some fields belonging to dimension tables, linked between each other through joins, (hence without querying fact tables): that happens because some of the tables brought in the datawarehouse are not directly linked to any fact table. In this way users want to use Answers to visualize data even for this kind of extractions (or operating reports).
    Is this a correct use of the tool or is it just a “twisted” way of using it, always leading eventually to incorrect extractions? If that’s the case, is it possible to use instead BI Publisher, extracting the dataset through the "Sql Query" mode in a visual manner? The problem of the latter solution, in my case, relies in the fact that users are not enough skilled from the technical point of view: they would prefer to use Answers for every extraction, belonging both to the first type (aggregations) and the second one (extractions), that I just described. Can you suggest a methodology to clarify this situation?

    Hi,
    I understand your point... But I think OBIEE doesn't allow having dimensions "on their own"; they must be joined to a fact table somehow. This way, when you do a query in Answers using fields of two dimension tables, a fact table is always involved. When dimensions are conformed, several fact tables may be used, and OBIEE uses the "best" one in terms of performance. However, there are some tricks you can use to make sure a particular fact table is used, like the "implicit fact column" in the presentation layer.
    So back to your point: using OBIEE for "operational" reporting, as you call it, is a valid option in my experience, but you have to make sure that the underlying star schema supports the logic your end users expect when they use just dimension fields.
    Regards,

  • Doubts about use of REPORTS_SERVERMAP with Forms11g HA

    Hi,
    I'm configuring a Linux 64-bit Forms/Reports 11g HA environment. The point is that I have two nodes, each one with its own Forms and Reports servers; let's say FormsA and ReportsA for the first node and FormsB and ReportsB for the second node.
    I want FormsA to be able to call reports from ReportsB and FormsB to be able to call reports from ReportsA.
    I've been reading about REPORTS_SERVERMAP
    http://docs.oracle.com/cd/E12839_01/bi.1111/b32121/pbr_conf003.htm#autoId5
    But I have some doubts about its use:
    1. I will not use a shared cluster file system or any kind of cache solution; I will only have my RDF files on each node, and I'm wondering if just by configuring this parameter I will be able to get the effect mentioned above?
    2. The link provided says "Using RUN_REPORT_OBJECT. If the call specifies a Reports Server cluster name instead of a Reports Server name, the REPORTS_SERVERMAP environment variable must be set in the Oracle Forms Services default.env file"
    In fact I'm using RUN_REPORT_OBJECT, but
    what is the Reports Server cluster name? And where do I find that name?
    3. Is this configuration well defined:
    REPORTS_SERVERMAP=clusterReports:ReportsA;clusterReports:ReportsB
    4. In Forms applications, when using RUN_REPORT_OBJECT, can I assume that the report server name will be the cluster name specified in the REPORTS_SERVERMAP?
    5. Which file should I modify, rwservlet.properties or default.env?
    Hope you can help me :)
    Regards
    Carlos

    Hi,
    1. I will not use a shared cluster file system or any kind of cache solution; I will only have my RDF files on each node, and I'm wondering if just by configuring this parameter I will be able to get the effect mentioned above?
    --> In such a case, what could go wrong is:
    Suppose RUN_REPORT_OBJECT executed the job successfully against ReportsA,
    but the web.show_document call for getjobid failed (because ReportsA went down in the meantime).
    --> You will not get the output shown (even though the job was successful).
    If a shared cache were enabled, then even if ReportsA is down, another cluster member (say ReportsB)
    would respond to web.show_document.
    Point 2:
    --> Under HA it is highly recommended to use web.show_document (a servlet call) to execute reports. This is to make use of all the HA features at the HTTP, Web Cache, or load balancer level.
    However, if there is migrated code or RUN_REPORT_OBJECT is a must, then the recommendations you see in the referenced document are a must.
    REPORTS_SERVERMAP setting needs to be configured in rwservlet.properties file and also in default.env Forms configuration file to map the Reports Server cluster name to the Reports Server running on the mid-tier where the Load Balancer forwarded the report request.
    For example FormsA, ReportsA, cluster name say rep_cluster
    default.env file
    REPORTS_SERVERMAP=rep_cluster:ReportsA
    Where "rep_cluster" is the Reports Server cluster name and "ReportsA" is the name of the Reports Server running on the same machine as FormsA
    rwservlet.properties file
    <reports_servermap>rep_cluster:ReportsA</reports_servermap>
    In default.env this is not a valid entry:
    REPORTS_SERVERMAP=clusterReports:ReportsA;clusterReports:ReportsB
    What is the Reports Server cluster name? And where do I find that name?
    --> This is created via EM on the report server side.
    Would recommend to refer following documents at the myoracle support repository
         How to Setup Reports HA (High Availability - Clusters) in Reports 11g [ID 853436.1]
         REP-52251 and REP-56033 Errors When Calling Reports From Forms With RUN_REPORT_OBJECT Against a Reports Cluster in 11g. [ID 1074804.1]
    Thanks

  • Doubt about proxies implementation

    Hi experts, I have a small doubt about proxies implementation.
    1. If we are implementing client proxies, it means SAP R/3 (proxy) -> XI -> file
         system. Here, where do we have to execute the SPROXY transaction: in SAP R/3 or
         in the XI server? And the next thing is, where do we have to write the report program
         to trigger the interface: in SAP R/3 or in the XI server?
    2. If we are implementing server proxies, it means file -> XI -> SAP R/3
        (proxy). Here, where do we have to execute the SPROXY transaction: in SAP R/3 or
         in the XI server?
    Please clear this up for me.
    Regards
    giri

    Sreeram,
    The Integration Server and the client on which you generate the proxies should not be the same. If they are different then yes, you can use another client in your XI box itself to generate proxies and trigger the call to XI.
    If you see this blog by Ravi ( incidentally he is my boss as well ) this is exactly what we have done as well.
    /people/ravikumar.allampallam/blog/2005/03/14/abap-proxies-in-xiclient-proxy
    When you say XI, you mean the Client on which the Integration Server is running! XI is basically a R3 instance with more functionality and its own Integration Engine.
    Regards
    Bhavesh

  • Doubt about unicode conversion

    Dear Friends,
    I have a doubt about the Unicode conversion. I have read (one-server strategy) that I have to do an export (R3load), delete the non-Unicode ECC 6.0, install a Unicode database and then import with R3load. My doubt is about the deletion of the non-Unicode ECC 6.0. Why? Can I convert to Unicode only with an export (R3load) and subsequent import (R3load) without any deletion? Obviously I must change the SAP kernel to a Unicode one. Is that correct?
    Cheers

    You cannot convert an SAP system to Unicode by just exchanging the executables from non-Unicode to Unicode ones.
    A non-Unicode SAP Oracle database is typically running with the database codepage WE8DEC or US7ASCII (well, this one is out of support).
    So every string stored in the database is stored using these codepages or SAP-internal codepages.
    When converting to Unicode you also have to convert the contents of the database to Unicode. As the Unicode implementation started at a time when Oracle did not support mixed-codepage databases (one tablespace with codepage WE8DEC, another one with UTF8), in-place conversions are not possible. To keep things simpler, we still do not support mixed-codepage databases.
    Therefore you have to export the contents of your database and import it into a newly created database with a different codepage if you want to migrate to Unicode.
    regards
    Peter

  • Doubt about Patch

    Hi everyone!
    I have a doubt about applying patch.
    I just applied this patch (using GUI):
    Oracle® Database Patch Set Notes
    *10g Release 2 (10.2.0.4) Patch Set 3 for Microsoft Windows (32-Bit)*
    Initially, there was an error stating that some other process is accessing one particular dll.
    I ignored it.
    I rerun the patch installation thinking that I need to get that dll as well.
    It gave a list of what has been installed and what can be installed.
    So, I selected all the remainders and installed.
    It was successful. Now, would it have ignored that dll or installed it?
    Please advise.
    Thanks in advance.
    Cheers!
    Nith
    Edited by: user645399 on Nov 26, 2010 10:24 AM

    Even if you've stopped all the Windows services, sometimes some DLLs remain in use. What I do is make all the Oracle-related services manual and then restart the server.
    You can't ignore any DLL-related errors. If a DLL is in use, the Oracle patching process will not replace it with a newer version of the DLL. I'd strongly recommend using the 'Microsoft Process Monitor' utility to monitor what is running on the server.
    http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
    Regards,
    S.K.

  • Durable subscribers questions

    Hello,
              I am building an application that has two WebLogic servers
              communicating certain information between them using JMS. I am using
              WebLogic's implementation of JMS, including destinations and messaging
              bridges.
              I am experiencing some problems.
              Server 1 takes files and puts them in a Stream Message and publishes
              them to a topic as shown here:
              Context context = new InitialContext();
              TopicConnectionFactory topicFactory = (TopicConnectionFactory)
              context.lookup(sourceDescriptor.getConnectionFactory());
              connection = topicFactory.createTopicConnection();
              session = connection.createTopicSession(false,
              Session.CLIENT_ACKNOWLEDGE);
              Topic topic = (Topic)
              context.lookup(sourceDescriptor.getTopicJNDIName());
              TopicPublisher publisher = session.createPublisher(topic);
              publisher.setDeliveryMode(DeliveryMode.PERSISTENT);
              StreamMessage message = session.createStreamMessage();
              for (int i = 0; i < files.length; i++) {
                  // read each file's contents into the StreamMessage ...
                  // ... then publish it
                  publisher.publish(message);
              }
              Topic config.xml
              <JMSTopic JNDIName="topic/topicName"
              Name="Topic Name" StoreEnabled="default"/>
              Destination config.xml
              <JMSBridgeDestination
              AdapterJNDIName="eis.jms.WLSConnectionFactoryJNDINoTX"
              ConnectionFactoryJNDIName="ConnectionFactory"
              ConnectionURL="t3://localhost:7001"
              DestinationJNDIName="queue/queueName" Name="Queue Destination
              Name"/>
              <JMSBridgeDestination
              AdapterJNDIName="eis.jms.WLSConnectionFactoryJNDINoTX"
              ConnectionFactoryJNDIName="ConnectionFactory"
              ConnectionURL="t3://machinename:8001"
              DestinationJNDIName="topic/topicName"
              DestinationType="Topic" Name="Topic Source Name"/>
              Messaging Bridge config.xml
              <MessagingBridge
              Name="The Bridge"
              QOSDegradationAllowed="true" QualityOfService="Duplicate-okay"
              SourceDestination="Topic Source Name"
              TargetDestination="Queue Destination Name" Targets="server1"/>
              Connection Factory
              <JMSConnectionFactory JNDIName="ConnectionFactory"
              Name="Server1 Connection Factory" Targets="server1"/>
              All settings in WebLogic are default for the configuration of this as
              far as I know.
              The bridge is using Duplicate Ok QOS at this point.
              This works like a charm as long as server which has the destination
              queue is up and running and has received messages in the past.
              The first problem is that the first set of messages is sent to the Queue
              on Server 2 but the MDB does not pick them up. I have left it running
              for an hour and these messages sit in the pending state for the
              monitoring of the Queue. In order to get this first set of messages
              to be processed I have to restart Server 2 at which point the initial
              messages are processed. Any subsequent messages are processed by the
              MDB immediately upon delivery. What is the cause of this and is there
              any way around having to restart after the first messages are received
              creating the file in the message store? Does this have anything to do
              with the fact that there is no file in the message store until the
              first messages are received and thus the MDB is not looking since
              there was nothing there on start up? This seems very odd....
              The second problem has to do with durability. I start Server 1 and it
              waits for a finish file and when it sees it it takes the files and
              puts them on the topic as described. As long as Server 2 is up and
              running at that time everything is fine. If However Server 2 is not
              running at that moment and is started at a later time then the
              messages seem to be lost. What could cause this? Is there any way I
              can visually see that the message is being held on Server 1? The
              bridge from this topic on Server 1 to the queue on Server 2 is
              configured on Sever 1 and you can see that it is working when Server 2
              is not running due to the connection refused messages the bridge
              throws. Is the bridge accepting the messages from the topic enough
              for the topic to decide it is done with them and delete them? Could
              the acknowledgement from the queue on another bridge for the same
              topic be confusing the topic and deleting the message before all
              subscribers get it? This is a big issue since it seems that durability
              is only guaranteed when both servers are running and a message is
              created.
              Any help on this would be appreciated...If you wouldn't mind cutting
              and pasting any responses to my email as well it would be appreciated
              as I use google for groups and it has a long turn around.
              TIA...
              

    Hi Kartheek,
    Once we create a TOPIC in NWA, the subscription ID and subscription name will be created automatically in the durable subscribers tab once you publish a message to the TOPIC.
    Check NWA --> JMS server configuration --> durable subscribers.
    thanks
    Purna

  • Doubt about Scan and Update Catalog Objects That Require Updates link

    Hi,
    I have a doubt about the 'Scan and Update Catalog Objects That Require Updates' link in Administration:
    how can I know how many objects require upgrading before I click this link?
    In the doc
    http://docs.oracle.com/cd/E28280_01/bi.1111/e10541/prescatadmin.htm#BIESG3750
    section 17.2.4 Updating Catalog Objects,
    it is said: 'You can confirm the need to update by viewing the metrics in Fusion Middleware Control. In the Catalog folder, find a metric called "Reads Needing Upgrade" with description "The number of objects read that required upgrading."'
    but I can't find it. My OBIEE version is 11.1.1.6.2.
    Could you please help me?
    thank you in advance.

    That metric should be there in the 11.1.1.6 version.
    I've verified in the 11.1.1.6 version of the doc that the same thing exists.
    ref: http://docs.oracle.com/cd/E23943_01/bi.1111/e10541/prescatadmin.htm#BAJDDFFI
    BTW:
    http://docs.oracle.com/cd/E28280_01/bi.1111/e10541/prescatadmin.htm#BIESG3750 is the 11.1.1.7 version.
    Thanks,
    http://cool-bi.com
