Caching XQuery Result in OSB for Performance Improvement

Hi,
I have written a custom XPath function in Java to pick the IP address of the machine and registered the same with Oracle Service Bus 11g.
In the Proxy Message Flow, the server IP is picked up at run time by an XQuery expression that refers to the custom function.
Now, is there a way to cache this result somewhere in OSB so that, on subsequent executions, the XQuery picks up the cached value instead of calling the custom function registered with OSB?
I'm asking this to understand whether performance optimization is possible here by caching the result.
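(One option I am considering, sketched roughly below, is to memoize the value inside the custom function class itself so the lookup happens only once per JVM; the class and method names are only illustrative, not my actual code.)
import java.net.InetAddress;
import java.net.UnknownHostException;

public class ServerInfoFunctions {

    // Cached after the first successful lookup so later calls skip the resolution.
    private static String cachedIp;

    // Static method exposed to OSB as the custom XPath function.
    public static synchronized String getServerIp() {
        if (cachedIp == null) {
            try {
                cachedIp = InetAddress.getLocalHost().getHostAddress();
            } catch (UnknownHostException e) {
                throw new RuntimeException("Unable to resolve local host address", e);
            }
        }
        return cachedIp;
    }
}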
Thanks,
CC

Hi All,
I have implemented both a Java Callout and the Custom XPath function in the same flow, and captured the current datetime stamp before and after each of these steps; the difference between the timestamps gives the execution time of the Java Callout and Custom XPath steps. (This may not be an elegant way of measuring the execution time of each step in an OSB message flow; please suggest the recommended approach here!)
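(For reference, the kind of XQuery expression that computes the difference in milliseconds between two captured timestamps; the variable names are illustrative.)
(: $startTime and $endTime are xs:dateTime values captured in Assign actions before and after the step;
   dividing the resulting duration by one millisecond yields the elapsed time in milliseconds :)
($endTime - $startTime) div xs:dayTimeDuration("PT0.001S")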
For a load of 10 users, each submitting 10 requests (100 requests in total), the following execution times (in milliseconds) were collected for 10 of those requests (the server was not busy with any other activity at the time):
Java Callout (ms) | Custom XPath (ms)
13 | 1
4  | 0
5  | 1
1  | 2
0  | 1
1  | 0
1  | 0
1  | 0
1  | 0
1  | 2
Where a value of '0' appears, I assume the actual time is some number of microseconds; the difference comes out as zero in those cases because we don't have microsecond-level logging.
Thanks,
CC

Similar Messages

  • XQuery Transformation in OSB for array of values

    Hi,
    I have followed the tutorial below on OSB development.
    http://www.oracle.com/technetwork/articles/jumpstart-for-osb-development-page--097357.html
    Everything works fine except for the "GetAllCustomer" branch node.
    Step 1: Configured the Business Service invoking the exposed web service at localhost:7001 --> it works and returns proper values when tested through both SoapUI and the OSB business service.
    Step 2: Similarly configured the proxy service with the XQuery transformation in place --> it runs with no errors but does not return any values.
    Step 3: After configuring the XQuery transformation, tested it through OEPE --> it returns results as expected.
    Please suggest where I am going wrong.

    Use a for loop (or the position() function) in your XQuery to iterate over the repeating elements; a minimal sketch follows the links below.
    Please check the threads below for examples of the same:
    Re: Assign activity erros with XPath query string returns multiple nodes.
    Re: OSB:for-each action working procedure with a sample.
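    (For illustration, a rough XQuery sketch of mapping a repeating source element into a repeating target element; the element names are made up and will differ from the tutorial's schema.)
    (: copy every Customer element from the business service response into the proxy response :)
    declare function local:mapCustomers($resp as element()) as element() {
        <AllCustomersResponse>
        {
            for $cust in $resp/Customer
            return
                <Customer>
                    <Id>{ data($cust/Id) }</Id>
                    <Name>{ data($cust/Name) }</Name>
                </Customer>
        }
        </AllCustomersResponse>
    };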

  • Please help me to modify the query for performance improvement

    Hi,
    I have the below initialization
    DECLARE @Active bit =1 ;
    Declare @id int
    SELECT @Active=CASE WHEN id=@id and [Rank] ='Good' then 0 else 1 END  FROM dbo.Students
    I have to change this query so that the conditions id=@id and [Rank]='Good' move into the WHERE clause of the query. In that case, how can I use a CASE statement to return 1 or 0? Can you please help me modify this initialization?

    I don't understand your query... maybe something like the below? Or provide us sample data and your expected output...
    SELECT *  FROM dbo.students
    where @Active=CASE
    WHEN id=@id and rank ='Good' then 0 else 1 END
    But I doubt you will see a performance improvement here.
    Do you have an index on id?
    If you are looking to get the data for @id with rank = 'Good', then use the below. Make sure you have an index on the (id, rank) combination; a sketch of such an index follows the query.
    SELECT *  FROM dbo.students
    where  id=@id
    and rank ='Good' 
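    (For illustration, the kind of index I mean; the index name is made up and assumes id and [Rank] are the only columns needed for the lookup.)
    -- nonclustered index supporting the id + Rank predicate
    CREATE NONCLUSTERED INDEX IX_Students_Id_Rank
        ON dbo.Students (id, [Rank]);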

  • How application module helps to improve performance

    Hi Everyone,
    I have a sample web application in which I connect to a single AM instance (an AM for a database view object), retrieve some information, and then close the connection. I am doing this as follows:
    // making AM instance
    <application module instance> = Configuration.createRootApplicationModule(<AM name>, config);
    // performing operations
    <operation result> = <application module instance>.<access VO with any operation>();
    System.out.println("Get result here");
    // disconnecting AM instance
    <application module instance>.getDBTransaction().disconnect();
    Configuration.releaseRootApplicationModule(<application module instance>, true);
    These are the activities performed by a single user. Now, I am doing a stress test on the same activities. I am testing the same code with 300 concurrent users (using JMeter with a JSP URL). These are working fine. I also checked multiple times, and it always works fine.
    Now, I need to do something to improve the performance. I know I can use AM pool configurations to make this more effective. I have gone through the Oracle documents and checked the same test case with the default and the recommended pool configurations, and I found similar results (there is not much difference).
    On the other hand, I tried the 'releaseRootApplicationModule' method with the false parameter and found better results with both the default and the recommended pool configurations.
    My question is: does changing the pool configurations as recommended by Oracle really work, or do I need to concentrate more on the coding part with the default pool configurations?
    Here, I would like to know what best practices (in code as well as pool configuration) I need to follow if I really want to improve performance in real scenarios (when our application is accessed by a large number of concurrent users).
    I really look forward to some help from the experts. I have given a lot of time to this to learn how we can really make our application more effective in terms of performance.
    I really appreciate your replies.
    Regards,
    Dilip Gupta.

    >
    We added the createRootApplicationModule() API (in the oracle.jbo.client.Configuration class) to simplify acquiring an application module from the pool for brief programmatic manipulation before it is released back to the AM pool.
    Steve Muench.
    >
    Check: "Check Your App for Misuse of Configuration.createRootApplicationModule()" - http://radio-weblogs.com/0118231/2009/08/20.html#a959
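    (For reference, the acquire/use/release pattern that article discusses looks roughly like the sketch below; the application module definition and configuration names are hypothetical.)
    import oracle.jbo.ApplicationModule;
    import oracle.jbo.client.Configuration;

    public class AmClientSketch {
        public static void main(String[] args) {
            // Acquire a root AM instance from the pool for brief programmatic use.
            ApplicationModule am =
                Configuration.createRootApplicationModule("model.AppModule", "AppModuleLocal");
            try {
                // ... perform view object operations here ...
            } finally {
                // The second argument controls whether the instance is removed
                // or handed back to the pool for reuse.
                Configuration.releaseRootApplicationModule(am, true);
            }
        }
    }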

  • Index creation on a field in DSO for performance improvement of a select statement

    Hi All,
    We have one DSO in Production which has around 48 million records.
    In a transformation of a cube, we have a select statement on the DSO comparing 5 fields, and 4 of them are data fields in the DSO. Some time ago we changed the structure of the DSO: we moved one InfoObject, which is used in the select query, from a key field to a data field.
    After this change we did a full load to the DSO, and everything runs fine except that the load to the cube gets stuck at the select statement. It runs forever.
    Now I have the below doubts,
    1. Can this happen because one of the fields we are using in the select statement does not have an index?
    2. If we want to create an index on that field, what would be the required steps?
    Thank you,
    Jaimin

    Hi Jaimin,
    Sorry, I misunderstood the question. Please explain the issue step by step once again.
    Rgds
    SVU123

  • Partitioning for Performance

    Hi All,
    Currently we have a STAR Schema with ORDER_FACT and ORDER_HEADER_DIM , ORDER_LINE_DIM, STORE_DIM, TIME_DIM and PRODUCT_DIM.
    We are planning to partition ORDER_FACT for performance improvements in both reporting and loading. We have around 100 million rows in ORDER_FACT. Daily we insert around 1 million rows and update around 2 million rows.
    We are trying to come up with some good strategies and we have a few questions.
    1) Our ORDER_FACT does not have any date columns except INSERT_DATE and LAST_UPDATE_DATE, which are more like timestamp columns. ORDER_DATE would be the appropriate one, but we do not store it in the fact. We have ORDER_DATE_KEY, which is a surrogate key of TIME_DIM.
    Can a monthly range partition still be performed? (I guess we need an ORDER_DATE column in our fact; a sketch of what we have in mind follows.)
    If somebody has handled this situation in some other way, any guidance will be helpful.
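    (Illustrative DDL only, assuming we add an ORDER_DATE column; the column list and partition names are made up, not our actual table.)
    CREATE TABLE order_fact_part (
        order_date_key   NUMBER        NOT NULL,
        order_date       DATE          NOT NULL,
        store_key        NUMBER,
        product_key      NUMBER,
        order_amount     NUMBER(12,2)
    )
    PARTITION BY RANGE (order_date) (
        PARTITION p_2011_01 VALUES LESS THAN (TO_DATE('01-02-2011','DD-MM-YYYY')),
        PARTITION p_2011_02 VALUES LESS THAN (TO_DATE('01-03-2011','DD-MM-YYYY')),
        PARTITION p_max     VALUES LESS THAN (MAXVALUE)
    );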
    2) Question below is assuming - we have a partitioned ORDER_FACT on ORDER_DATE.
    Currently we are doing a merge (update/insert) on ORDER_FACT. We have an incremental load in which only newly inserted or updated rows from the source are processed.
    The update/insert is slow.
    Can we use PEL (Partition Exchange Loading) and avoid the merge (update/insert)?
    PEL is fine for new rows, since it replaces an empty partition in the target with a loaded partition from the source. How do we handle updating and inserting rows into a partition that already has existing rows?
    Any help on these would be helpful.
    Thanks,
    Samurai.

    Speaking from our experience, at some point you need to build your fact rows so you need an insert/update prior to PEL anyway, and you would need your partitions closely matched to your refresh frequency for it really to be effective.
    So what we have done is focus on the "E" part of ETL.
    Our remote source database is mirrored on our side via Streams. This mirrors into a local copy that we can run various reports/ processes/ queries against without impacting production.
    We also perform a custom apply that populates a second local copy of the tables, but these ones are partitioned daily and are used for our ETL. So, at the end of the day we have a partitioned set of data that contains only the current status of rows that have changed over the day. Now, of course, this is problematic for ETL because you need to have all of the associated information with those changes in order to do your ETL.
    (Simple example: data in a customer's address record changes. Your ETL query undoubtedly joins the customer record and the address record to build your customer dimension row. But Streams only propagates the changed address record, so you wouldn't have the customer record in that daily partition for your join.)
    So, we have a process that runs after the Streams apply is finished that walks the dependency tree and populates all dependent data into the current daily partition, so at the end of our prep process we have a partitioned set of data that holds a complete set of source tables where anything has changed across any dependencies.
    This gives us a small, efficient daily data set to run our ETL queries against.
    The final piece of the puzzle is that we access this segment via synonyms, and the synonyms are pointed at this day's partition. We have a control structure that manages the list of partitions and repoints the synonyms prior to running the ETL. The partition loading and the ETL synonym pointing are completely decoupled so, for example, if we ever needed to suspend our ETL to get a code fix in place we can let the partition loading move ahead for a day or two and then play catchup loading the partitions in sequence and be confident that we have each end-of-day picture there to use.
    By running our ETL against only the changed data, we achieve huge efficiencies in query performance. And by managing the ETL partitions, we don't incur the space costs of a second full copy of the source, as we prune out the partitions once we are satisfied with the load at the end of a month (with full backups, of course, in case there is ever a huge problem to go back and correct).
    Now for facts, of course, we expect these to be insert only. Facts shouldn't change. For dimensions we use set based fail over to row based (target only), with a couple specified to be Row Based Target Only as they are simply too large to ever complete in Set Based Mode.
    Yes, this is a bit of a convoluted process - exacerbated by our need to have a full local copy for some reporting needs and the partitioned change copy for the datamart ETL, but at the end of the day it all works and works well with properly designed control mechanisms.
    Cheers,
    Mike

  • Webi performance improvement

    Hi All, can anyone share your experience of the steps you took to improve performance when Webi documents are opened on the CRM portal? We are trying to display data for a customer and we have integrated 3 Webi documents on the CRM portal. The back end is HANA. For a few big customers it takes a lot of time and the portal times out. Can any steps be taken on the Webi end to improve the performance?

    Optimize the query in Webi.
    When you run the Webi document in the launch pad, how much time does it take?
    You can also limit the number of rows returned in the universe.

  • How to tune this query to improve performance?

    Hi All,
    How can I tune this query to improve performance?
    select a.claim_number, a.pay_cd, a.claim_occurrence_number,
           case
             when sum(case when a.payment_status_cd = '0'
                           then a.payment_est_amt else 0 end) = 0
             then 0
             else sum(case when a.payment_status_cd = '0' and a.payment_est_amt > 0
                           then a.payment_est_amt else 0 end)
                - sum(case when a.payment_status_cd <> '0'
                           then a.payment_amt else 0 end)
           end as estimate
      from ins_claim_payment a
     where a.as_of_date between '31-jan-03' and '30-aug-06'
       and (a.data_source = '25' or a.data_source between '27' and '29')
       and substr(a.pay_cd, 1, 1) in ('2', '3', '4', '8', '9')
     group by a.claim_number, a.pay_cd, a.claim_occurrence_number
    Thank you,
    Mcka

    Mcka
    As well as EXPLAIN PLAN, let us know what proportion of rows are visited by this query. It may be that it is not using a full table scan when it should (or vice versa).
    And of course we'd need to know what indexes are available, and how selective they are for the predicates you have in this query ...
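    (For reference, a minimal way to capture that plan with DBMS_XPLAN; the statement below is abbreviated to the grouping columns only, so substitute the full query from the post.)
    EXPLAIN PLAN FOR
    select a.claim_number, a.pay_cd, a.claim_occurrence_number
      from ins_claim_payment a
     where a.as_of_date between '31-jan-03' and '30-aug-06'
     group by a.claim_number, a.pay_cd, a.claim_occurrence_number;

    select * from table(dbms_xplan.display);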
    Regards Nigel

  • Performance problem...is there a way to cache query results?

    Greetings team,
    I've been deploying DS5.2 for a while now and am on the cusp of pushing it into our production environment. However, I've been noticing lately that some hosts are taking an exorbitantly long time to log in (actually, a user noted it, and I'm now investigating).
    Logins to hosts in this environment can take anywhere from 10-50 seconds. One thing that I've noticed is that any time you run a command that requires any awareness of uid-to-username translation (e.g. if you ls -l /opt/home), queries are made to the configured directory server for this information. Is this normal? Since UIDs and usernames don't often change (in most environments, anyway), is there a way this could be cached?
    I see also in my access log for my primary server (configured as a hub, btw) that there is near constant traffic to that host for LDAP info. I'm not sure why it's so chatty, but it does appear to be slowing things down a bit. The load on my LDAP host (a SunFire V210 w/ 1GHz processor, 1024MB RAM) seems to float between 1 and 12, with sar reporting an average idle time of about 44%.
    Any ideas? I'm really at a loss to explain why there's so much traffic to this host when much of it seems to come from hosts with nobody logged into them.
    Patrick

    It is great that you have found the root cause of your issue.
    nscd is by default started at boot time by a usual OS install. There is an /etc/nscd.conf, but I doubt that anyone will change anything there, as the default settings are good for most cases.
    I think LDAP search performance is also affected by the existence of search indexes.
    I have observed that if the user home directory is NFS mounted, especially over a WAN, be it via /etc/fstab or automount maps, the login process will be very slow; it will take a while to obtain a command prompt at the home directory level.
    Gary

    Gary et al,
    In my environment nscd has been explicitly disabled for some historical reasons, none of which are still a problem. So, I'm going to enable it for only passwd and group caching, with the default values for those caches.
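    (For reference, the kind of /etc/nscd.conf entries I mean; the time-to-live values below are just illustrative, not a recommendation.)
    # cache only passwd and group lookups
    enable-cache            passwd  yes
    positive-time-to-live   passwd  600
    negative-time-to-live   passwd  20
    enable-cache            group   yes
    positive-time-to-live   group   3600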
    I'm in the process of working out my performance tuning plan for my LDAP servers, but I'm definitely going to have an eye on indices and caches. Those will probably have the least impact on search times and such for the moment since my directory is so tiny (261 entries!), but preventing that traffic from hitting the server at all will be a huge savings.
    I can definitely see why WAN mounted homedirs would cause things to lag. That's not the case here since NFS is a big no-no.
    Patrick

  • DMA Performance Improvements for TIO-based Devices

    Hello!
    DMA Performance Improvements for TIO-based Devices
    http://digital.ni.com/public.nsf/websearch/1B64310FAE9007C086256A1D006D9BBF
    Can I apply the procedure to NI-DAQmx 9? These ini files don't seem to exist anymore in the newer version.
    Best, Viktor

    Hi Viktor,
    this page is 7 years old and doesn't apply to DAQmx.
    Regards, Stephan

  • Performance improvement in OBIEE 11.1.1.5

    Hi all,
    In OBIEE 11.1.1.5, reports take a long time to load. Kindly provide me with some performance improvement guides.
    Thanks,
    Haree.

    Hi Haree,
    Steps to improve the performance.
    1. implement caching mechanism
    2. use aggregates
    3. use aggregate navigation
    4. limit the number of initialisation blocks
    5. turn off logging
    6. carry out calculations in database
    7. use materialized views if possible
    8. use database hints
    9. alter the NQSConfig.ini parameters
    Note: calculate all the aggregates in the repository itself and create a fast refresh for MVs (materialized views).
    You can also schedule an iBot to run the report every hour or so, so that the report data is cached and, when the user runs the report, the BI Server extracts the data from the cache.
    This is the latest tuning guide for OBIEE 11g:
    http://blogs.oracle.com/pa/resource/Oracle_OBIEE_Tuning_Guide.pdf
    Report level:
    1. Enable the cache -- in NQSConfig.ini, change ENABLE from NO to YES (a sketch of the relevant section follows this list).
    2. Go to the physical layer --> right-click the table --> Properties --> check Cacheable.
    3. Try to implement an aggregate mechanism.
    4. Create indexes/partitions at the database level.
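    (For reference, the cache section of NQSConfig.ini looks roughly like this; the storage path and size limits below are only illustrative.)
    [CACHE]
    ENABLE = YES;
    DATA_STORAGE_PATHS = "C:\OracleBI\cache" 500 MB;
    MAX_ROWS_PER_CACHE_ENTRY = 100000;
    MAX_CACHE_ENTRY_SIZE = 1 MB;
    MAX_CACHE_ENTRIES = 1000;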
    There are multiple other ways to fine tune reports from OBIEE side itself:
    1) You can check your measures' granularity in reports and have level-based measures created in the RPD using the OBIEE aggregate persistence utility.
    http://www.rittmanmead.com/2007/10/using-the-obiee-aggregate-persistence-wizard/
    This will make queries hit your aggregate tables rather than the detailed tables.
    2) You can use cache seeding options, either using an iBot or using the NQCMD command-line utility:
    http://www.artofbi.com/index.php/2010/03/obiee-ibots-obi-caching-strategy-with-seeding-cache/
    http://satyaobieesolutions.blogspot.in/2012/07/different-to-manage-cache-in-obiee-one.html
    OR
    http://hiteshbiblog.blogspot.com/2010/08/obiee-schedule-purge-and-re-build-of.html
    Using one of the above two methods, you can fine-tune your reports and reduce the query time.
    Also, to be on the safe side, take the physical SQL from the log and run it directly on the DB to see the time taken, and check the explain plan with the help of a DBA.
    Hope this helps.
    Thanks,
    Satya

  • Will there be a performance improvement with separate tables vs. a single table with multiple partitions?

    Will there be a performance improvement with separate tables vs. a single table with multiple partitions? Is it advisable to have separate tables rather than a single big table with partitions? Can we expect the same performance from a single big table with partitions? What is the recommended approach in HANA?

    Suren,
    first off a friendly reminder: SCN is a public forum and for you as an SAP employee there are multiple internal forums/communities/JAM groups available. You may want to consider this.
    Concerning your question:
    You didn't tell us what you want to do with your table or your set of tables.
    As tables are not only storage units but usually bear semantics - read: if data is stored in one table it means something different than the same data in a different table - partitioned tables cannot simply be substituted by multiple tables.
    Looked at on a storage technology level, table partitions are practically the same as tables. Each partition has got its own delta store and can be loaded and displaced to/from memory independently of the others.
    Generally speaking there shouldn't be too many performance differences between a partitioned table and multiple tables.
    However, when dealing with partitioned tables, the additional step of determining the partition to work on is always required. If computing the result of the partitioning function takes a major share in your total runtime (which is unlikely) then partitioned tables could have a negative performance impact.
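    (To make the comparison concrete, a minimal sketch of the two alternatives in HANA SQL; the table and column definitions are made up.)
    -- alternative 1: a single column table with hash partitions
    CREATE COLUMN TABLE sales_all (
        id     INTEGER,
        region NVARCHAR(2),
        amount DECIMAL(15,2)
    ) PARTITION BY HASH (id) PARTITIONS 4;

    -- alternative 2: separate tables, with the split carried in the table names
    CREATE COLUMN TABLE sales_eu (id INTEGER, amount DECIMAL(15,2));
    CREATE COLUMN TABLE sales_us (id INTEGER, amount DECIMAL(15,2));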
    Having said this: as with all performance related questions, to get a conclusive answer you need to measure the times required for both alternatives.
    - Lars

  • Caching XSLT results?

    Hi,
    I am a newbie to XML/XSLT, and we have just written our first XSL stylesheet.
    I am using the WebLogic JSP Tag Library to do the transformation from XML to HTML
    and all seems to work well. My question is this: Does WebLogic cache the HTML
    somewhere so that the transformation does not need to be reperformed each time
    the page is requested by the user? It sounds like an obvious thing to do, but
    I'm not sure if this functionality is included in WebLogic. If it is not, are
    there others out there who have this same need (to cache XSLT results for performance
    reasons) and how have you dealt with this issue. Any help would be appreciated.
    I'm running WLS 6.0 SP2 on HP-UX 11.x. Thanks...
    Vasuki.

    I tried this example based on a view:
    CREATE MATERIALIZED VIEW MV_TEST2
         REFRESH COMPLETE
         START WITH SYSDATE
         NEXT SYSDATE + 1/48
         WITH ROWID
         AS SELECT * FROM test1;

    REFRESH COMPLETE      -- the complete refresh re-creates the entire materialized view
    START WITH SYSDATE    -- run now
    NEXT SYSDATE + 1/48   -- run again in half an hour
    WITH ROWID            -- I think this option is important if you use partial refresh of the view
    AS SELECT * FROM test1;  -- test1 is a view:

    CREATE OR REPLACE VIEW TEST1 AS
    SELECT st_id, st_name
      FROM aaw_solution_tree;

    Are column indexes still possible? I'm not sure:
    Indexing: with respect to MVs on 10gR2, Jonathan Lewis wrote: "... you are allowed to create indexes on the tables that sit under materialized views - just don't make them unique indexes."
    How much freedom is there in setting the refresh rate?
    What type of refreshing do you need?
    Another useful link: http://asktom.oracle.com/pls/ask/search?p_string=materialized+view
    Hope it helps.
    Tobias

  • Performance improvement using TEZ/HIVE

    Hi,
    I'm a newbie to HDInsight; sorry for asking simple questions. I have queries around performance improvement of my Hive query on file data of 90 GB (15 GB * 6).
    We have set the execution engine to Tez. I heard the Avro format improves the speed of execution. Is the Avro SerDe enabled for Tez queries, or do I need to upload *.jar files to WASB? I'm using the latest version. Any sample query?
    In Tez, can the ORC column format and Avro compression work together when we set the ORC compression level in Hive to Snappy or LZO? Is there any limitation on the number of columns for ORC tables?
    Is there a best compression technique for uploading a data file to Blob storage, i.e. compress and then upload? I used *.gz, which compressed the file to about 1/4th of its size, and uploaded it to Blob storage, but the problem is that *.gz is not splittable, so it will always use a single mapper. Or should I use Avro with Snappy compression? Does the Microsoft Avro Library perform Snappy compression, or is there any codec that is both splittable and compressed?
    If the data structure of the file changes over time, will it be necessary to reload older data? Can the existing query work without a change in code?
    It has been said that Tez has real-time reporting capability, but when I query the 90 GB file (the query includes GROUP BY and ORDER BY clauses) it takes almost 8 minutes on 20 nodes. Are there any pointers to improve performance further and get the query result in seconds?
    Mahender

    -- Tez is an execution engine; I don't think you need any additional jar file to get the Avro SerDe working in Hive when Tez is used. You can use AvroSerDe, AvroContainerInputFormat & AvroContainerOutputFormat to get Avro working when Tez is used.
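    (For illustration, a rough sketch of an Avro-backed external table using those classes; the table name, location and schema are made up.)
    CREATE EXTERNAL TABLE events_avro
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    LOCATION '/data/events_avro'
    TBLPROPERTIES ('avro.schema.literal'='{"type":"record","name":"Event","fields":[{"name":"event_id","type":"string"},{"name":"event_ts","type":"string"}]}');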
    -- I tried creating a table with about 220 columns; although the table was empty, I was able to query from it. How many columns does your table hold?
    CREATE EXTERNAL TABLE LargColumnTable02 (t1 string, .... t220 string)
    PARTITIONED BY (EventDate string)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    STORED AS ORC
    LOCATION '/data'
    TBLPROPERTIES ("orc.compress"="SNAPPY");
    -- You can refer to http://dennyglee.com/2013/03/12/using-avro-with-hdinsight-on-azure-at-343-industries/ , the "Getting Avro data into Azure Blob Storage" section.
    -- It depends on what data has changed and on what you are using (Hadoop, HBase, etc.).
    -- You will have to monitor your application and check the node manager logs to see if there is any pause in execution. It depends on what you are doing; I would suggest opening a support case to investigate further.

  • Cannot query using both conforming and cached query result

    TopLink doesn't allow me to use both conforming and cached query results at the same time.
    Conforming is certainly not a superset of the [cached query result] features.
    Can you confirm that it's a limitation of TopLink?
    Any known workaround to end up with the same features as using both conforming and cached query results?
    Conforming is about seeing modifications you make in the same transaction. As a bonus, if you query for one object and specify at least the id as criteria, TopLink has to check in memory anyway, so it can avoid going to the database.
    But if I do a query like "give me employees hired before now and after 30 days ago", it is about more than one object and about finding existence, so cached query results are needed to get acceptable performance in a complex application that tries to avoid generating the same SQL over and over again.

    That's where the trace just ends? It doesn't look like there's any LIKE or filtering going on (with respect to the Oracle pieces anyway); apparently MS Access simply requested the whole table.
    What do you mean by 'hang' exactly? Are you sure it's just not taking a long time to complete? How long have you waited? How fast does it complete on the other environment?
    ODBC tracing isn't likely to help much for that. SQL*Net tracing would be better to see what is going on at a lower level. Specifically, what is going on at the network level? Is the client waiting for a packet to be returned from the database?
    Is the database having a hard time processing the query, perhaps due to index/tuning issues?
    Assuming that is indeed the query that is "hung", how much data does that return?
    Are you able to reproduce the same behavior with that query and vbscript for example?
    Greg
