JMS Best Practice

We are starting to use JMS more and more heavily for back-end sync and replication of changes.
What I want to know is how I should use connections and sessions. These processes are always running, but the connections seem to time out.
Should I be opening new connections each time, or just sessions, or even just the senders or publishers? I haven't done much benchmarking to see whether this is bad, but borrowing the idea from JDBC we have a JMSBroker that manages the connections.
I also notice that there is no obvious way to check whether things are still open using the API, so I am forced to try an operation, catch the exception, and then re-connect if required. This causes loops and odd conditions when there is a serious issue (like the JMS server being down, or a network problem), because the system thinks it's just a timed-out connection.
Any thoughts on this, or pointers to JMS best practices?
Steve

I think best practice would be to instantiate a single Connection, a single Session, and finally a single MessageConsumer. If you are sending messages, do the same but create a single MessageProducer at the end. If you are both consuming and producing, make sure to use the same Session for creating the consumer and the producer so that receiving and sending are wrapped in a single transaction. XA can also be used if you want a third party to coordinate the JMS transaction with, say, a database transaction.
You can also register an ExceptionListener on the Connection which may get around the timeout issue.
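Roughly, the setup might look like this (a minimal sketch against the plain JMS 1.1 API; the JNDI names and the error handling are placeholders for whatever your provider and application actually use):

    import javax.jms.*;
    import javax.naming.InitialContext;

    public class OrderSender {
        private Connection connection;
        private Session session;
        private MessageProducer producer;

        public void start() throws Exception {
            InitialContext ctx = new InitialContext();
            // Administrative objects come from JNDI; these names are placeholders.
            ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory");
            Destination destination = (Destination) ctx.lookup("jms/MyQueue");

            connection = factory.createConnection();
            // Get told asynchronously when the provider drops the connection,
            // instead of discovering it on the next send.
            connection.setExceptionListener(new ExceptionListener() {
                public void onException(JMSException e) {
                    // log, tear everything down, schedule a reconnect
                }
            });
            session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            producer = session.createProducer(destination);
            connection.start(); // only strictly needed if you also consume
        }

        public void send(String text) throws JMSException {
            producer.send(session.createTextMessage(text));
        }

        public void stop() throws JMSException {
            connection.close(); // closes the session and producer too
        }
    }

One caveat: a Session and its producers/consumers are meant for a single thread, so if several threads send concurrently, give each thread its own Session while still sharing the one Connection.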
Of course, as I write this I realize that some of this assumes an ideal world. If your JMS provider has some odd timeout behavior and you don't have a high volume of messages, just keep the administrative objects (ConnectionFactories, Destinations) around and recreate everything else for each batch of messages sent. Do what works now, leave the optimizations for when they are needed.
Dwayne

Similar Messages

  • Having multiple service operations in a single JMS adapter best practice

    Hi All,
    I am using JDeveloper and SOA Suite 11.1.1.6. I need to read from multiple JMS Topics, transform and enhance the messages through the Mediator, and then persist them into a database.
    My question is:
    What is the best practice for consuming from multiple topics? Should I configure a separate JMS adapter for each Topic destination,
    OR
    have a single JMS adapter with multiple operations by manually changing the JMS adapter WSDL and JCA files?
    I find I cannot have multiple operations in a single adapter.
    Please suggest.
    Thanks in advance
    Edited by: user5108636 on 15/05/2013 11:36

    Hi Vijay,
    did you actually test this? When I finish creating a DBAdapter, there is an operation present. Then, when I click edit again on the DBAdapter and create another select, the first operation is gone when I finish, and I can only see the one I've created via the last edit.
    I don't understand your reply. Can I have two operations, each with a select underneath, in the same adapter?
    Edited by: user13604541 on Jan 30, 2012 11:19 AM

  • Best Practices for JMS Service Documentation

    Our software consists of a variety of JMS producers and consumers used to transform and transmit business to business messages on a large scale.
    The service nodes do a variety of things, and we've had trouble over the years ensuring every queue-based service clearly identifies the parameters and payload it accepts, so that everyone from programmers to system administrators can easily see what services are available and how they are to be used.
    Some have advocated always adding a web service in front of each message-based service to guarantee interface contracts are all well publicized. I think there are reasons to choose web services and reasons to choose message-based services, and am not convinced this is the right answer to address limitations in expressing the design contract for a message-based service. That said, I really like what we've been able to do with self-documenting web services based on annotations, and wonder if there's a conceptual equivalent in message-based software.
    What are your best practices for ensuring your message-based services are as self-documenting as your modern web service?
    Thanks in advance for your advice!
    Edited by: lsamaha on Apr 16, 2012 11:46 AM

    *bump

  • JMS / MDB Best Practices

    Hello All,
    There is currently a dispute in my camp about the limit of "responsibility" to place on a single MDB. In this case, the argument surrounds whether it is "good practice" to allow an MDB's onMessage function to:
    - Open a connection to a servlet (the URL object obtained from the Connection Manager via JNDI)
    - Post some data
    - Read the response and log it to local DB via JDBC
    Firstly, is this too much responsibility for a single MDB?
    Secondly, can someone please point me in the direction of some good MDB "Best Practices" documentation out there, free or otherwise.
    I'm looking forward to settling this dispute.
    Thank you all in advance.

    Hi,
    An MDB may become one of your application bottlenecks if it has to perform a time-expensive operation. In such a case messages may pile up in your destination, and dependent processes may block waiting for an asynchronous event. However, if your MDB's work is isolated, then its message throughput should not impact the overall application speed, and expensive computation can therefore be delegated to it (assuming that the number of messages waiting to be processed is reasonable).
    In your particular case, I would say that you should be OK, assuming that your servlet doesn't suffer long downtimes. Depending on your expected traffic, you will have to deploy the right number of servlet and MDB instances.
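    As a rough illustration of the onMessage flow you describe, something like the sketch below (the JNDI names, servlet URL and log table are made up; in a real MDB you would cache the DataSource, validate the message type, and let the container's transaction handle redelivery):

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import javax.jms.TextMessage;
    import javax.naming.InitialContext;
    import javax.sql.DataSource;

    public class SyncMDB implements MessageListener {  // deployed as an MDB

        public void onMessage(Message message) {
            try {
                String payload = ((TextMessage) message).getText();

                // 1. Post the data to the servlet (URL would normally come from JNDI/config).
                URL url = new URL("http://example.internal/syncServlet");
                HttpURLConnection http = (HttpURLConnection) url.openConnection();
                http.setDoOutput(true);
                http.getOutputStream().write(payload.getBytes("UTF-8"));

                // 2. Read the response.
                int status = http.getResponseCode();

                // 3. Log the result to the local DB via JDBC.
                DataSource ds = (DataSource) new InitialContext().lookup("jdbc/LocalLogDS");
                Connection con = ds.getConnection();
                try {
                    PreparedStatement ps = con.prepareStatement(
                            "INSERT INTO sync_log (status, payload) VALUES (?, ?)");
                    ps.setInt(1, status);
                    ps.setString(2, payload);
                    ps.executeUpdate();
                } finally {
                    con.close();
                }
            } catch (Exception e) {
                // a RuntimeException forces redelivery in a transacted MDB
                throw new RuntimeException(e);
            }
        }
    }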
    Hope this helps
    Arnaud

  • Need advice on best practice when using TopLink with external transactions

    Hello;
    Our project is trying to switch from TopLink-controlled transactions to external transactions so that we can perform database operations and JMS operations within a single transaction.
    Some of our team tried out the TopLink support for external transactions and came up with the initial recommendations below.
    Since we are not familiar with external transactions, I would like members of this forum, and the experts here, to comment on whether these recommendations are valid and in line with best practice. For those who have done this in their projects, what did you do?
    Any help will be most appreciated.
    Data Access Objects must be enhanced to support reading from a TOPLink unit of work when using an external transaction controller. Developers must consider what impact a global transaction will have on the methods in their data access objects (DAOs).
    The following findSomeObject method is representative of a “finder” in the current implementation of our DAOs. It is not especially designed to execute in the context of a global transaction, nor read from a unit of work.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        ClientSession clientSession = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            obj = (SomeObject) clientSession.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            clientSession.release();
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    However, after making the following changes (the external-transaction checks below), the findSomeObject method will read from a unit of work while executing in the context of a global transaction.
    public SomeObject findSomeObject(ILoginUser aUser, Expression queryExpression) {
        Session session = getClientSession(aUser);
        SomeObject obj = null;
        try {
            ReadObjectQuery readObjectQuery = new ReadObjectQuery(SomeObject.class);
            readObjectQuery.setSelectionCriteria(queryExpression);
            if (TransactionController.getInstance().useExternalTransactionControl()) {
                session = session.getActiveUnitOfWork();
                readObjectQuery.conformResultsInUnitOfWork();
            }
            obj = (SomeObject) session.executeQuery(readObjectQuery);
        } catch (DatabaseException dbe) {
            // throw an appropriate exception
        } finally {
            if (TransactionController.getInstance().notUseExternalTransactionControl()) {
                session.release();
            }
        }
        if (obj == null) {
            // throw an appropriate exception
        }
        return obj;
    }
    When getting the TOPLink client session and reading from the unit of work in the context of a global transaction, new objects need to be cached.
    public UnitOfWork getUnitOfWork(ILoginUser aUser) throws DataAccessException {
        ClientSession clientSession = getClientSession(aUser);
        UnitOfWork uow = null;
        if (TransactionController.getInstance().useExternalTransactionControl()) {
            uow = clientSession.getActiveUnitOfWork();
            uow.setShouldNewObjectsBeCached(true);
        } else {
            uow = clientSession.acquireUnitOfWork();
        }
        return uow;
    }

    As is generally the case with this sort of question, there is no exact answer.
    The only required update when working with an external transaction is that getActiveUnitOfWork() is called instead of acquireUnitOfWork(); other than that, the semantics of the calls and when you use a UnitOfWork still depend on the requirements of your application. For instance, I noticed that originally the findSomeObject method did not perform a transactional read (no UnitOfWork). Have the requirements for this method changed? If they have not, then there is still no need to perform a transactional read, and the method would not need to change.
    As for the requirement that new objects be cached, this is only needed if you are not conforming the transactional queries, and it adds a slight performance boost for find-by-primary-key queries. To use it, however, objects must be assigned primary keys by the application before they are registered in the UnitOfWork.
    --Gordon

  • Best practice for data persistance for monitoring without BAM

    Greetings,
    We are modeling a business process in a large organization using BPEL Process Manager. The key point is that the business people need to monitor the execution of the business process in several key sectors of the process execution, and they also need report information about the process.
    To model this in our project, we decided to create a new Oracle database schema that will hold the information about the business process execution (we decided that because, for this initial offering, the customer is not buying BAM). In this context, the BPEL process will send this key information to the repository so business people can view real-time information about the process execution as well as historical information in the form of reports.
    The important issue here is whether there is a best practice for sending the information to the database schema. Could it be done just using database adapters, or maybe using sensors that send the data over topic connections?
    Any help will be highly appreciated.
    Thanks in advance.

    Hi, yes, this suggestion is good. First configure the sensors (activity or variable), then configure the sensor action as a JMS Topic, which will in turn insert the data into a DB. Alternatively, when you configure the sensor action as a DB, the data goes to the Oracle Reports schema. Is there any chance of altering that, i.e. of changing configuration files so that the data does not go to the Reports schema and instead goes to a custom schema created by a user? I don't know whether it can be done. My problem is that when I configure the JMS Topic for sensor actions, I see blank data coming through; for some reason the data is not getting posted. I have used an ESB with a routing service based on the schema I am monitoring. Can anyone help?

  • Best practice for remote topic subscription with HA

    I'd like to create an orchestrator EJB, in cluster A, that must persist some data to a database of record and then publish a business event to a topic. I have two durable subscribers, MDBs on clusters B & C, that need to receive the events and perform some persistence on their side.
    I'd like HA so that a failure in any managed server would not interrupt the system. I can live with at-least-once delivery, but at the same time I'd like to minimize the amount of redundant message processing.
    The documentation gets a little convoluted when dealing with clustering. What is the best practice for accomplishing this task? Has anyone successfully implemented a similar solution?
    I'm using WebLogic 8.1 SP5, but I wouldn't mind hearing solutions for later versions as well.

    A managed server failure makes that server's JMS servers unavailable, which, in turn, makes the JMS server's messages unavailable until either (A) the JMS server is migrated or (B) the managed server is restarted.
    For more discussion, see my post today on the topic "distributed destinations failover - can't access messages from other node". Also, you might be interested in the circa-8.1 migration white paper on dev2dev: http://dev2dev.bea.com/pub/a/2004/05/ClusteredJMS.html
    Tom

  • OBIEE Best Practice Data Model/Repository Design for Objectives/Targets

    Hello World!
    We are faced with a design question that has become somewhat difficult and we need some help. We want to be able to compare actual measures side by side with their corresponding objectives/targets. Sounds simple. But our objectives are static (not able to be aggregated), multi-dimensional, and multi-level. We need some best practice tips on how to design our data model and repository properly so that we can see the objective/target for a measure regardless of the dimensions used in the criteria and regardless of the level.
    Here is some more details:
    Example of existing objective table.
    Dimension1   Dimension2   Dimension3   Obj1   Obj2   Quarter
    NULL         NULL         NULL         .99    1.8    1Q13
    DIM1VAL1     NULL         NULL         .99    2.4    1Q13
    DIM1VAL1     DIM2VAL1     NULL         .98    2.41   1Q13
    DIM1VAL1     DIM2VAL1     DIM3VAL1     .97    2.3    1Q13
    DIM1VAL1     NULL         DIM3VAL1     .96    1.9    1Q13
    NULL         DIM2VAL1     NULL         .97    2.2    1Q13
    NULL         DIM2VAL1     DIM3VAL1     .95    2.0    1Q13
    NULL         NULL         DIM3VAL1     .94    3.1    1Q13
    - Right now we have quarterly objectives set using 3 different dimensions. So, if an author were to add one or more (or zero) dimensions to their criteria for a given measure they could get back a different objective. They could add Dimension1 and get 99%. They could add Dimension1 and Dimension2 and get 98%. They could add all three dimensions and get 97%. They could add zero dimensions (highest grain) and get 99%. Using our existing structure if we were to add a new dimension to the mix the possible combinations would grow dramatically. (Not flexible)
    - We would like our final solution to be flexible enough so that we could view objectives with altogether different dimensions and possibly get different objectives.
    - We currently have 3 fact tables with 3+ conformed dimension tables and a few unique dimension tables.
    Could anyone share a similar situation where you have implemented a data model structure with the proper repository joins to handle showing side-by-side objectives/targets where the objectives were static and could be displayed at differing levels with flexible dimensions as described?
    Any help would be greatly appreciated.


  • Is this a best practice of BAM implementation?

    Hello everyone:
    Currently we have done an Oracle BAM implementation. To explain briefly our implementation:
    We have an Oracle Database 8.1.7 where all transactions are recorded. We tried using JMS to import the data into data objects in the Oracle BAM repository. We did this by using a database link to an Oracle Database 10g and then Advanced Queueing. This did not work due to performance issues: the AQ messages were not consumed as fast as they were produced, so there was no real-time data.
    Then we developed a Java component to read the table in the Oracle Database 10g and started using batch upserts into Oracle BAM through the web services API provided. This solved the performance issue mentioned above.
    Currently we do all the data processing in the Oracle 10g database through PL/SQL stored procedures; data mining is applied to the transactions and the summary information is collected into several tables. These tables are updated and then imported into the Oracle BAM data objects.
    We have noticed that Oracle BAM has some performance issues when trying to view a report based on a data object with a large number of records. Is this really an issue in Oracle BAM? The average number of transactions is 200,000 records. How can we solve this issue?
    Another issue we want to raise: when viewing reports through the browser, the browser sometimes hangs or suddenly closes. Sometimes the Active Data Cached Feed window hangs or doesn't close. When this happens and we try to open another report, the report never displays. Is this a browser-side issue or a server-side issue?
    Oracle BAM is installed on a blade with 2x2 Xeon processors (4 CPUs), 16GB RAM and Windows Server 2003 Enterprise Ed. with SP2.
    How can we get a tuning guide based on best practices?
    Where can we get suggestions about our implementation?
    Thanks to anyone who can help us.

    Even i am facing similar issue. Any pointers would be appreciated.
    Thanks.

  • Best practice for OSB to OSB communication

    Cross posting this message:
    I am currently in a project where we have two OSB that have to communicate. The OSBs are located in different security zones ("internal" and "secure"). All communication on a network level must be initiated from the secure zone to the internal zone. The message flow should be initated from the internal zone to the secure zone. Ideally we should establish a tcp connection from the secure zone to the internal zone, and then use SOAP over HTTP on this pre-established connection. Is this at all possible?
    Our best approach now, is to establish a jms-queue in the internal zone and let both OSBs connect to this. All communication between the zone is then done over JMS. This is an approach that would work, but is it the only solution?
    Can the t3/t3s protocol be used to achieve our goal, i.e. to have synchronous communication over a pre-established connection (one that is established in the opposite direction of the communication)?
    Is there any other approach that might work?
    What is considered best practice for sending messages from a OSB to another OSB in a more secure zone?
    Edited by: hilmersen on 11.jun.2009 00:59

    Hi,
    In my experience in a live project, we have used secured communication (https) between internal service bus and DMZ/external service bus.
    We also used two way SSL with customers.
    The ports were also secured by firewall in between them.
    If you wish more details, please email [email protected]
    Ganapathi.V.Subramanian[VG]
    Sydney, Australia
    Edited by: Ganapathi.V.Subramanian[VG] on Aug 28, 2009 10:50 AM

  • Best Practice on Updating From a DB

    Hi Everyone,
    What are some best practices surrounding getting data from an oracle database into the cache layer when a data change event (insert, update, delete) happens? I've searched far and wide and the best answer I can find is to use Extractor/Replicator -> JMS -> Subscriber -> cache.
    Thank you for your help.

    You're right, DCN is an interesting idea, but it's again a case where the technology works for simple Hello World examples but fails to deliver in the real world.
    To me DCN looks like an unfinished Oracle project: a lot of marketing, but poor features. It's suitable mostly for student work or test labs, not for real-world complexity.
    Two reasons:
    1. DCN has severe limitations on the complexity of joins and queries if you plan to use the query change notification feature.
    2. It puts too much pressure on the database by creating tons of events you don't need and don't expect, because it's too generic.
    Instead of DCN, create ordinary Oracle AQ queues, using a tiny SQL object type event as the payload, then create triggers and/or PL/SQL stored procedures which fill the event with all the primary keys you need and the unique ID of the object you need to extract.
    Triggers will filter out unnecessary updates, sending events only when you wish.
    If the conditions are too complex for triggers, you may create and enqueue events either from the event source application itself or on a scheduled basis; it's entirely up to you. The technique of creating object views and using INSTEAD OF triggers on those views also works pretty well.
    Finally, implement a listener on the Coherence side which reads the event, makes the necessary extracts and assembles a Java object ready to be placed into the cache, based on the event ID and the set of primary keys carried by the event. After the Java object has been assembled, you can place it into the cache.
    Don't use Hibernate, TopLink or any other object-relational framework; they are too slow and add excessive, unnecessary overhead to the process. Use standard Oracle database features instead; they are much faster and transaction-safe. Usage of these frameworks with a 10g or 11g database is obsolete and caused mainly by a lack of knowledge among Java developers about what the database offers in this regard.
    To make the whole system fail-safe and scalable, you have to implement the listener in a fail-safe fashion, in the form of a work manager plus slave processes spawned on the other nodes. The work manager has to be auto fail-safe and auto scalable, so that if the node holding the work manager instance fails, due to a cache cluster member departure, a reset or something else, another work manager is automatically spawned on the first available node.
    Also, the work manager should spread and synchronize the work among the slave listener processes based on the current cache cluster members, automatically re-balancing and recovering work in case of cache member join/departure.
    Out of the box, Coherence has a work manager implementation, but it is not fail-safe and does not provide the automatic scale-up/recovery features described above, so you have to implement your own.
    All the features I've described are implemented and happily used in a complex OLTP + workflow system backed by a big Oracle RAC cluster with a huge workload, processing millions of transactions per day.
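    For the listener side, a minimal sketch (assuming the AQ queue is exposed through JMS as a MapMessage-style event and using the Coherence NamedCache API; the cache name, event field and database load step are hypothetical):

    import javax.jms.MapMessage;
    import javax.jms.Message;
    import javax.jms.MessageListener;
    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CacheLoadListener implements MessageListener {

        // Hypothetical cache name; NamedCache is the standard Coherence cache handle.
        private final NamedCache cache = CacheFactory.getCache("orders");

        public void onMessage(Message message) {
            try {
                // Here the AQ event is assumed to arrive as a MapMessage carrying only keys;
                // with a SQL object type payload the unpacking would look different.
                MapMessage event = (MapMessage) message;
                long id = event.getLong("ORDER_ID");

                Object value = loadFromDatabase(id);   // plain JDBC extract + assembly
                cache.put(Long.valueOf(id), value);
            } catch (Exception e) {
                // log and let the work manager recover / redeliver the event
                throw new RuntimeException(e);
            }
        }

        private Object loadFromDatabase(long id) {
            // Application-specific: read the current state by primary key and build
            // the object to be cached (no ORM, per the advice above).
            return null; // placeholder
        }
    }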

  • Best Practice type question

    Our environment currently has 1 Connection factory defined per JMS Module. We also have multiple queues per JMS Module.
    In other J2EE app server environments I have worked in, we defined one connection factory per queue.
    Can someone explain whether there is a best practice, or at least a good reason, for doing one over the other?
    The environment here is new enough that we can change how things are set up if it makes sense to do so.

    My two cents: A CF allows configuration of client load-balancing behavior, flow-control behavior, default QOS, etc. I think it's good to have one or more CFs configured per module, as presumably all destinations in the module are related in some way and therefore likely all require basically the same client behavior. If you have very few destinations, then it might help to have one CF per destination, but this places a bit more burden on the administrator to configure the extra CFs in the first place, and on everyone else to remember which CF is best for communicating with which destination.
    Tom

  • Best Practices on OWB/ODI when using Asynchronous Distributed HotLog Mode

    Hello OWB/ODI:
    I want to get some advice on best practices when implementing OWB/ODI mappings to handle Oracle Asynchronous Distributed HotLog CDC (change data capture), specifically for “updates”.
    Under Asynchronous Distributed HotLog mode, if a record is changed in a given source table, only the column that has been changed is populated in the CDC table with the old and new value, and all other columns with the exception of the keys are populated with NULL values.
    In order to process this update with an OWB or ODI mapping, I need to compare the old value (UO) against the new value (UN) in the CDC table. If the old and the new value are not the same, then this is the updated column. If both the old and the new value are NULL, then this column was not updated.
    Before I apply a row update to my destination table, I need to figure out the current value of those columns that have not been changed and replace the NULL values with the current value. Otherwise, my row update will overwrite with NULLs the columns whose values have not changed. This is where I am looking for advice on best practices. Here are the 2 possible solutions I can come up with, unless you have a better suggestion on how to handle “updates”:
    About My Environment: My destination table(s) are part of a dimensional DW database. My only access to the source database is via Asynchronous Distributed HotLog mode. To build the datawarehouse, I will create initial mappings in OWB or ODI that will replicate the source tables into staging tables. Then, I will create another set of mappings to transform and load the data from the staging tables into the dimension tables.
    Solution #1: Use the staging tables as lookup tables when working with “updates”:
    1.     Create an exact copy of the source tables into a staging environment. This is going to be done with the initial mappings.
    2.     Once the initial DW database is built, keep the staging tables.
    3.     Create mappings to maintain the staging tables using as source the CDC tables.
    4.     The staging tables will always be in sync with the source tables.
    5.     In the dimension load mapping, “join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the staging tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #2: Use the dimension tables as lookup tables when working with “updates”:
    1.     Delete the content of the staging tables once the initial datawarehouse database has been built.
    2.     Use the empty staging tables as a place to process the CDC records.
    3.     Create mappings to insert CDC records into the staging tables.
    4.     The staging tables will only contain CDC records (i.e. new records, updated records, and deleted records).
    5.     In the dimension load mapping, “outer join” the staging tables, and identify “inserts”, “updates”, and “deletes”.
    6.     For “updates”, use the dimension tables as lookup tables to get the current value of the column(s) that have not been changed.
    7.     Apply the updates in the dimension tables.
    Solution #1 uses staging tables as lookup tables. It requires extra space to store copies of source tables in a staging environment, and the dimension load mappings may take longer to run because the staging tables may contain many records that may never change.
    Solution #2 uses the dimension tables as both the lookup tables as well as the destination tables for the “updates”. Notice that the dimension tables will be updated with the “updates” AFTER they are used as lookup tables.
    Any other approach that you guys may suggest? Do you see any other advantage or disadvantage against any of the above solutions?
    Any comments will be appreciated.
    Thanks.

    hi,
    can you please tell me how to make the JDBC call? I tried it as:
    1. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "jdbc:oracle:thin");
    and
    2. TopicConnectionFactory tc_fact = AQjmsFactory.getTopicConnectionFactory(host, SID, Integer.parseInt(port), "thin");
    -as given in http://www.acs.ilstu.edu/docs/oracle/server.101/b10785/jm_opers.htm#CIHJHHAD
    The 1st one is giving the error:
    Caused by: oracle.jms.AQjmsException: JMS-135: Driver jdbc:oracle:thin not supported
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:330)
    at oracle.jms.AQjmsTopicConnectionFactory.<init>(AQjmsTopicConnectionFactory.java:96)
    at oracle.jms.AQjmsFactory.getTopicConnectionFactory(AQjmsFactory.java:240)
    at com.ivy.jms.JMSTopicDequeueHandler.init(JMSTopicDequeueHandler.java:57)
    The 2nd one is erroring out:
    oracle.jms.AQjmsException: JMS-225: Invalid JDBC driver - OCI driver must be used for this operation
    at oracle.jms.AQjmsError.throwEx(AQjmsError.java:288)
    at oracle.jms.AQjmsConsumer.dequeue(AQjmsConsumer.java:1307)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:1028)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:951)
    at oracle.jms.AQjmsConsumer.receiveFromAQ(AQjmsConsumer.java:929)
    at oracle.jms.AQjmsConsumer.receive(AQjmsConsumer.java:781)
    at com.ivy.jms.JMSTopicDequeueHandler.receiveMessages(JMSTopicDequeueHandler.java:115)
    at com.ivy.jms.JMSManager.run(JMSManager.java:90)
    at java.lang.Thread.run(Thread.java:619)
    Is anything else required beyond this? Please help. :(
    Oracle: 10g R4
    Linux environment, and Java is trying to do AQjmsFactory.getTopicConnectionFactory(...); the Java machine is different from the database machine, and no Oracle client is to be installed on the Java machine.
    The same code works fine when I use oci8 instead of the thin driver and run it on the DB machine.
    ravi

  • Printing from SOA Suite 11.1.1.7 (Best Practice)

    Using SOA Suite 11.1.1.7
    Looking for best practices around printing from within Oracle SOA. I have a PDF and we need to print the document to a specific print queue. We know the IP address of the printer and other details such as its name, etc.; these are held in a central database, as eventually we want the user to be able to select from a list of printers. The printers also sit on Microsoft print servers.
    What would be the best way to print, probably from a BPEL process? I'm looking for guidance, so all options are welcome.
    How has anyone else achieved this?
    Thanks

    Hi
    There is no specific 'web service to printer' option available in Oracle SOA Suite out of the box.
    Ultimately you're moving message payloads around between web services, and using the adapters (JMS, MQ, DB, file, FTP, etc.) to move/manipulate those payloads to various endpoints on disparate systems.
    There has to be an interface to the printer which either you will have to create or the printer can poll network file/folder locations to pickup files to print.
    Maybe it has an http front end or can talk web service standards?
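    If you do end up writing that interface yourself, one option is a small Java piece (callable from BPEL, for example via a Java callout or wrapped as a web service) that hands the PDF to a named print queue via the standard javax.print API. A rough sketch only, with the printer name and file path as placeholders, and note that not every Windows print queue accepts PDF streams directly:

    import java.io.FileInputStream;
    import javax.print.Doc;
    import javax.print.DocFlavor;
    import javax.print.DocPrintJob;
    import javax.print.PrintService;
    import javax.print.PrintServiceLookup;
    import javax.print.SimpleDoc;
    import javax.print.attribute.HashPrintRequestAttributeSet;

    public class PrintQueueService {

        // printerName would come from your central database of printers
        public void printPdf(String printerName, String pdfPath) throws Exception {
            PrintService[] services =
                    PrintServiceLookup.lookupPrintServices(DocFlavor.INPUT_STREAM.PDF, null);
            for (PrintService service : services) {
                if (service.getName().equalsIgnoreCase(printerName)) {
                    DocPrintJob job = service.createPrintJob();
                    Doc doc = new SimpleDoc(new FileInputStream(pdfPath),
                            DocFlavor.INPUT_STREAM.PDF, null);
                    // If the queue/driver does not accept PDF, the document would
                    // need to be rendered or converted before submission.
                    job.print(doc, new HashPrintRequestAttributeSet());
                    return;
                }
            }
            throw new IllegalArgumentException("Printer not found: " + printerName);
        }
    }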
    HTH

  • Logical level in Fact tables - best practice

    Hi all,
    I am currently working on a complex OBIEE project/solution where I am going straight to the production tables, so the fact (and dimension) tables are pretty complex, since I am using multiple sources in the logical tables to increase performance. Anyway, what I often struggle with is the Logical Levels (in the Content tab) where the level of each dimension is to be set. In a star schema (one-to-many) this is pretty straightforward and easy to set up, but when the Business Model (and physical model) gets more complex I sometimes struggle with the aggregates, i.e. getting them to work/appear with different dimensions. (Using the menu "More" - "Get levels" does not always give the best solution, far from it.) I also have some combinations of left and right outer joins, making it even more complicated for the BI server.
    For instance, I have about 10-12 different dimensions. Should all of them always be connected to each fact table, either at the Detail or the Total level? I can see the use of the logical levels when using aggregate fact tables (on quarter, month, etc.), but is it better just to skip the logical level setup when no aggregate tables are used? Sometimes it seems like that is the easiest approach...
    Does anyone have a best practice concerning this issue? I have googled for this but I haven't found anything good yet. Any ideas/articles are highly appreciated.

    Hi User,
    "For instance, I have about 10-12 different dimensions. Should all of them always be connected to each fact table, either at the Detail or the Total level?" It is not necessary to connect to all dimensions; it depends on the report that you are creating. But as a best practice we should maintain all of them at the Detail level when you specify join conditions in the physical layer.
    For example, for the sales table, if you want to report at the ProductDimension.Productname level then you should use the Detail level, otherwise the Total level (at the Product or Employee level).
    "Get Levels. (Available only for fact tables) Changes aggregation content. If joins do not exist between fact table sources and dimension table sources (for example, if the same physical table is in both sources), the aggregation content determined by the Administration Tool will not include the aggregation content of this dimension."
    Source: Admin Guide ("Get Levels" definition)
    thanks,
    Saichand.v
