Poor performance in Repository mapping test tool (7.0.12)

Hi,
I'm just looking for some hints for performance tuning the Mapping tool in the Repository. This is our Development machine.
Performance seems to be significantly worse since the upgrade from SPS 10 to SPS 12.
Every "display queue" / recompile of the map takes almost 5 minutes.
The Java stack currently has about 4 GB of heap, ABAP 1 GB.  We reallocated a large amount from ABAP to Java, which temporarily alleviated the problem (about 60% quicker), but it returned after a few days.  There is currently no possibility of adding more memory.
The problem seems to be noticeable primarily in the IB, hence I'm looking for any performance tuning tips that would directly affect this.  I'm thinking we need to concentrate on the Java stack.  I've looked at note 894509 and the SAP NetWeaver Process Integration Tuning Guide and not seen anything directly related at first glance.
I've deactivated end-to-end monitoring.
The only thing that stands out is a significant amount of activity on the disk with the sap/oracledb/oracleexec on it.  It's about 9 times greater than the amount of swap.  Two processes seem to be regularly at the top of CPU usage: dw.sapPID_DVEBMGS14 and /usr/sap/PID/DVEBMG.
Hints and tips appreciated.  As I said, hardware changes are not possible at this time.
Thanks
James.
Yes, points will be awarded where appropriate.

Are you facing this issue only when you test the mapping in the IR, or even at runtime?
I have also noticed that mapping tests in the tool in the IR always take a long time, but as they never reflect the runtime mapping performance I have always ignored them.
A few possible reasons:
1. Low RAM on the machine being used by you.
2. Too many applications running.
Remember you are connecting to the server from your IR using JWS, so the resources on your machine also matter quite a bit.
If this is completely off track, ignore the reply.
Regards
Bhavesh

Similar Messages

  • How to perform an XSLT Mapping Test

    Hi,
    I have defined an XSLT mapping in XI and it seems to work, but when I perform a test in my interface mapping the source fields are not written to the respective target fields. What is the problem?
    Thanks!

    The issue is in the logic of your XSLT mapping.
    Use the same source XML and use it with XML Spy to debug why it is not working as you want it to.
    Regards
    Bhavesh

  • PI 7.0 SP12 - Mapping test tool and activation very long response

    Hi all, I have a performance issue only in the IR, on both of these actions:
    - displaying queues (debugging a mapping)
    - activating mapping objects
    For everything else (runtime, ...), the server is OK and fast.
    Do you have any ideas?
    Thanks a lot

    Hi,
    Please have a look at the few articles below:
    Decrease of performance in local machine when Message Mapping opened.
    SAP NetWeaver XI Performance Analysis - Webinar Powerpoint
    https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/489f5844-0c01-0010-79be-acc3b52250fd
    thanks
    Swarup

  • Forte 6U2 - poor performance of std::map::operator[]

    I discovered to my horror this morning that the Forte 6U2 implementation of std::map (at least at the patch level I'm working with) ALWAYS default constructs a value_type when operator[] is invoked - even if an element with that key already exists in the map.
    Any reasonable STL implementation first searches with lower_bound and, failing to find an appropriate element, uses the resulting iterator as an insertion hint, only then default constructing a value_type (and therefore a mapped_type) element.
    The existing implementation in Forte 6U2 imposes an absurd penalty on the use of the subscript operator when mapped_type's default construction semantics are non-trivial. While "correct" in the sense that the contract for std::map::operator[] is met, I cannot imagine that this was an intentional design choice given the performance implications. How can I get this issue addressed?

    Well, I wasn't making any claim about the efficiency of the underlying tree representation - only the fact that operator[] unconditionally default constructs a mapped_type object, even if there is already an appropriate element in the map, which is no good at all. I'll try your request for enhancement thing though...
    While I would very much like to switch to STLport from the RW STL, a compiler upgrade is not the answer here.
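    The lookup-then-hint-insert behavior described above can be sketched as a small generic helper (an illustrative sketch, not the Forte/RW internals): it only constructs a mapped_type when the key is genuinely absent.

    ```cpp
    #include <cassert>
    #include <map>
    #include <string>

    // Find-or-insert with operator[] semantics, but without the redundant
    // default construction when the key is already present.
    template <typename Map>
    typename Map::mapped_type& find_or_insert(Map& m,
                                              const typename Map::key_type& k) {
        typename Map::iterator it = m.lower_bound(k);
        if (it != m.end() && !m.key_comp()(k, it->first)) {
            return it->second;  // key already present: no construction at all
        }
        // Key absent: default construct once, inserting with the hint iterator.
        it = m.insert(it, typename Map::value_type(k, typename Map::mapped_type()));
        return it->second;
    }

    int main() {
        std::map<std::string, int> counts;
        find_or_insert(counts, "hits") = 5;   // inserts a new element
        find_or_insert(counts, "hits") += 1;  // finds it; no reconstruction
        assert(counts["hits"] == 6);
        assert(counts.size() == 1);
        return 0;
    }
    ```

    The hint makes the insert amortized constant time when the iterator points at the right position, so the helper costs one tree search either way.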

  • Load testing tools for SharePoint 2013

    Can anyone give a detailed comparison of load testing tools for SharePoint 2013?
    Nishanthi N

    Hi,
    Please check the following urls
    http://www.sharepointcolumn.com/sharepoint-performance-testing-tools/
    http://sharepointdragons.com/2012/12/26/the-great-free-performance-load-and-stress-testing-tools-that-can-be-used-with-sharepoint-verdict/
    Please mark it as the answer if this reply helps you in resolving the issue; it will help other users facing a similar problem.

  • Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running google maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple maps has received a poor performance rating just after introduction of the iPhone 5. I am running Google Maps app on the phone. SIRI cannot seem to get me to a specific address. Where does the problem lie? Also can anyone tell me the hierarchy of use between the Apple Maps, SIRI, and Google maps when the app is on the phone? How do you choose one over the other as the default map usage? Or better still how do you suppress SIRI from using the Apple maps app when requesting a "go to"?
    I have placed an address location into the CONTACTS list and when I ask SIRI to "take me there" it found a TOTALLY different location in the metro area with the same street name. I have included the address, the quadrant (NE) and the ZIP code in the CONTACTS list. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS list line would prevent SIRI from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS list (NE being the accepted method of defining the USPS location quadrant), canceled the current map route, and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed, but the address is one of a hospital in the center of town, and this hospital HAS a branch location in a similar part of town (NOT the original address SIRI was trying to take me to). This screw-up could be dangerous if not catastrophic to someone who was looking for a hospital location fast and did not know of these two similar locations. After all, the whole POINT of directions is not just whimsical pastime or convenience. In a pinch people need to rely on this function. OR, are my expectations set too high?
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to SIRI finding and showing a map route?
    Why does SIRI return an address that is NOT the correct address nor is the returned location in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit SIRI to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try Google voice in the Google app.

  • Forms and Reports: Automated Test tools - functionality AND performance

    All,
    I'm looking to get a few leads on automated test tools that may be used to validate Oracle Forms and Reports (see my software configuration below). I'm looking for tools that can automate both functional and performance tests. By this I mean:
    Functional Testing:
    * Use of shortcut keys
    * Navigation between fields
    * Screen organisation (field locations)
    * Exercise forms validation (bad input values)
    * Provide values to forms and simulate user commit, and go and verify database state is as expected
    Performance Testing:
    * carry out tests for fixed user load
    * carry out tests for scaled step increase in user load
    * automated collection of log files and metrics during test
    So far I have:
    http://www.neotys.com/
    Thanks in advance for your response.
    Mathew Butler
    Configuration:
    Red Hat Enterprise Linux x86-64 architecture v4.5 64 bit
    Oracle Application Server 10.1.2.0.2 ( with patch 10.1.2.3 )
    Oracle Developer Suite (Oracle Forms and Reports) V10.1.2.0.2 ( with patch 10.1.2.3 )
    Oracle JInitiator 1.3.1.17 or later
    Microsoft Internet Explorer 6

    Are there any tools for doing this activity, like Oracle-recommended tools?
    Your question is unclear.  As IK mentioned, the only tool you need is a new version of Oracle Forms/Reports.  Open your v10 modules in a v11 Builder and select Save.  You now have a v11 module.  Doing a "Compile All PL/SQL" before saving is a good idea, but not required.  The Builders and utilities provided with the version 11 installation are the only supported tools for upgrading your application.  If you are trying to do the conversion of many Forms files in a scripted manner, you can use the Forms compiler in a script.  Generating new "X" files will also update the source modules (fmb, mmb, pll).  See MyOracleSupport Note 955143.1.
    Also included in the installation is the Forms Migration Assistant.  Although it is more useful to people coming from older versions, it can also be used to move from v10 to v11.  It allows you to select more than one file at a time.  Documentation for this utility can be found in the Forms Upgrade Guide.
    Using the Oracle Forms Migration Assistant

  • Database Performance Testing Tool

    Hi Gurus,
    Can anyone suggest me some Performance Testing Tools with respect to Database Environment?
    The Tools in the Open Source Environment would be preferable.
    Thanks in advance.
    ~Anup

    Hi Anup,
    There's a tool called Orion on the OTN page that's used to simulate database I/O activity. Try it!
    Regards,
    Jonathan Ferreira - Brazil
    http://www.ebs11i.com.br

  • PDA performance testing tool

    I saw in the "Developing Mobile Applications using SAP NetWeaver Mobile" book published by SAP Press that SAP provides the Mobile Client Benchmarking Tool to test PDA performance. However, I could not find such a tool anywhere. Does this tool really exist? If yes, where can I get it?
    Otherwise, does anyone know some tool(s) that can be used for performance testing for NetWeaver mobile applications ( offline type) running on PDAs?
    Regards,
    Edited by: Hortense Andrian on Jan 21, 2011 1:07 AM

    Hi David,
    We need to do performance tuning of the application by adopting some test tools. On referring to the forum, I found various tuning strategies available to tune SQL queries. But I am more concerned with the availability of performance test tools to do deeper tuning of the application.
    Regards,
    Mahesh.S

  • SAP provides Load testing/Performance testing tool

    Kindly suggest any Load testing tool which is provided by SAP itself.
    *Note to author of this question: I have taken the liberty of moving this to the proper thread
    cheers, Marilyn

    Hi Swapan,
    I would be glad if you could give me a step-by-step screenshot document for a load test on an SAP application (any module).
    The reason I am asking all this: downloading and installing LoadRunner on a desktop/standalone machine is very simple, but when it comes to a network environment where you have the Controller installed on one machine, the Load Generators on another machine, and Diagnostics installed on (I don't know where it will be installed) a user/server machine, it is really difficult to imagine/picture the whole scenario by taking x as an example.
    I would be really glad and thankful if someone could let us know how to quick-start a project.
    I have gone through the documentation "How to Perform SAP EP Load Testing" - good enough to understand, but it would be even better if someone had articulated it with interactive screenshots.
    OK, now my next question: how do we go about the SAP GUI protocol?
    Can someone give me an example with some interactive screenshots?
    If someone is working in SAP, then please contribute your knowledge by all means, like https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/webcontent/uuid/ba95531a-0e01-0010-5e9b-891fc040a66c [original link is broken]
    This is a very beautiful video example on SAP BEx reports, with video and voice.
    It was recorded using the Camtasia Studio software, which records your desktop as you work while you explain in voice.
    Hope someone comes up with a nice video presentation on SAP LoadRunner.
    Can you please show me/upload any document with some interactive screenshots on configuring LoadRunner with SAP and testing with the SAP GUI protocol, on any one module of SAP - be it SD, MM, or APO?
    In fact, as of now, I am in urgent need of a sample load test scenario using Mercury LoadRunner (SAP GUI protocol) on any SAP module, with some interactive screenshots.
    I appreciate your quick response.
    Will award maximum points.
    Please help me by mailing any document with some sample scenarios, step by step, to my mail id: [email protected].
    Thanks
    Vinni..

  • XI Performance Testing Tool

    Hi All,
    Are there any performance testing tools for XI? Basically, the tool should be able to send messages of arbitrary sizes and a large number of messages one after the other.
    Thanks,
    Sandeep

    Hi,
    try Mercury LoadRunner,
    a great tool for testing XI,
    and you can get a trial edition:
    http://www.mercury.com/us/products/loadrunner/
    Regards,
    michal
    <a href="/people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions"><b>XI / PI FAQ - Frequently Asked Questions</b></a>

  • Poor performance of maps

    Does anyone know of any configuration parameters that are really useful when attempting to tweak the performance of the maps in OBIEE 11.1.1.5? Although I'm the only user on a virtual box with 4GB of RAM and 2 CPUs, map rendering is painfully slow. (XML posted below, working against the "A - Sample Sales" subject area). All other OBIEE performance is quite snappy. Maps ... not so much.
    Any ideas?
    <saw:report xmlns:saw="com.siebel.analytics.web/report/v1.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlVersion="201008230" xmlns:sawx="com.siebel.analytics.web/expression/v1.1">
    <saw:criteria xsi:type="saw:simpleCriteria" subjectArea="&quot;A - Sample Sales&quot;">
    <saw:columns>
    <saw:column xsi:type="saw:regularColumn" columnID="c3131dd22bf3cc8ee">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Time"."T05 Per Name Year"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="cddcbd415b4c3bdfa">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Ship To Geo Codes"."R62 Geo Ctry State Name"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c371b413a6d22de6a">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Products"."P4 Brand"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c95ae97ecd00f14f7">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Base Facts"."1- Revenue"</sawx:expr></saw:columnFormula></saw:column>
    <saw:column xsi:type="saw:regularColumn" columnID="c229fa547a7d1b4cd">
    <saw:columnFormula>
    <sawx:expr xsi:type="sawx:sqlExpression">"Base Facts"."2- Billed Quantity"</sawx:expr></saw:columnFormula></saw:column></saw:columns>
    <saw:filter>
    <sawx:expr xsi:type="sawx:comparison" op="equal">
    <sawx:expr xsi:type="sawx:sqlExpression">"Ship To Geo Codes"."R61 Geo Country Code"</sawx:expr>
    <sawx:expr xsi:type="xsd:string">USA</sawx:expr></sawx:expr></saw:filter></saw:criteria>
    <saw:views currentView="3">
    <saw:view xsi:type="saw:compoundView" name="compoundView!1">
    <saw:cvTable>
    <saw:cvRow>
    <saw:cvCell viewName="titleView!1">
    <saw:displayFormat>
    <saw:formatSpec/></saw:displayFormat></saw:cvCell></saw:cvRow>
    <saw:cvRow>
    <saw:cvCell viewName="tableView!1">
    <saw:displayFormat>
    <saw:formatSpec/></saw:displayFormat></saw:cvCell></saw:cvRow></saw:cvTable></saw:view>
    <saw:view xsi:type="saw:titleView" name="titleView!1"/>
    <saw:view xsi:type="saw:tableView" name="tableView!1">
    <saw:edges>
    <saw:edge axis="page" showColumnHeader="true"/>
    <saw:edge axis="section"/>
    <saw:edge axis="row" showColumnHeader="true">
    <saw:edgeLayers>
    <saw:edgeLayer type="column" columnID="c3131dd22bf3cc8ee"/>
    <saw:edgeLayer type="column" columnID="cddcbd415b4c3bdfa"/>
    <saw:edgeLayer type="column" columnID="c371b413a6d22de6a"/>
    <saw:edgeLayer type="column" columnID="c95ae97ecd00f14f7"/>
    <saw:edgeLayer type="column" columnID="c229fa547a7d1b4cd"/></saw:edgeLayers></saw:edge>
    <saw:edge axis="column"/></saw:edges></saw:view>
    <saw:view xsi:type="saw:mapview" name="mapview!1">
    <saw:mapLayout>
    <saw:splitterLayout orientation="horizontal"/></saw:mapLayout>
    <saw:basemap sid="__OBIEE__MAPVIEW__TILE__OBIEE_NAVTEQ_SAMPLE__OBIEE_WORLD_MAP__~v0"/>
    <saw:mapWidgets>
    <saw:mapToolBar>
    <saw:panTools>
    <saw:panHand display="true"/></saw:panTools>
    <saw:zoomTools>
    <saw:zoomIn display="true"/>
    <saw:zoomOut display="true"/></saw:zoomTools>
    <saw:selectionTools>
    <saw:pointTool display="true"/></saw:selectionTools></saw:mapToolBar>
    <saw:mapInformation>
    <saw:scaleInfo display="true"/>
    <saw:overview display="true" viewState="collapsed"/></saw:mapInformation>
    <saw:mapOverlay>
    <saw:panButtons display="true"/>
    <saw:zoomSlider display="true"/></saw:mapOverlay></saw:mapWidgets>
    <saw:infoLegend display="true" viewState="collapsed"/>
    <saw:viewportInfo>
    <saw:boundingLayer layerID="l0"/>
    <saw:mapCenter x="-95.90625" y="34.3125" size="31.015625" srid="8307" zoomLevel="3" xUnitPixels="12.8" yUnitPixels="12.8"/>
    <saw:boundingBox coords="-119.34375,18.8828125,-72.546875,49.8203125"/></saw:viewportInfo>
    <saw:spatialLayers>
    <saw:spatialLayer sid="__OBIEE__MAPVIEW__LAYER__OBIEE_NAVTEQ_Sample__OBIEE_STATE__~v0" class="omv_predefined_layer" layerID="l0">
    <saw:layerLabelFormat display="true"/>
    <saw:spatials>
    <saw:columnRef columnID="cddcbd415b4c3bdfa"/></saw:spatials>
    <saw:visuals>
    <saw:visual visualID="v0" xsi:type="saw:colorScheme">
    <saw:varyFillColor binType="percentile" numBins="4">
    <saw:columnRef columnID="c95ae97ecd00f14f7"/>
    <saw:rampStyle>
    <saw:rampItem id="0">
    <saw:g class="color" fill="#fffeff"/></saw:rampItem>
    <saw:rampItem id="1">
    <saw:g class="color" fill="#eae6ec"/></saw:rampItem>
    <saw:rampItem id="2">
    <saw:g class="color" fill="#d2cfd5"/></saw:rampItem>
    <saw:rampItem id="3">
    <saw:g class="color" fill="#bcb8be"/></saw:rampItem></saw:rampStyle></saw:varyFillColor>
    <saw:g class="color"/></saw:visual></saw:visuals></saw:spatialLayer></saw:spatialLayers>
    <saw:canvasFormat width="800" height="400"/>
    <saw:mapInteraction autoCreateFormats="true"/>
    <saw:formatPanel width="218" height="513"/></saw:view></saw:views></saw:report>

    Here are a few tuning tips you might want to look at.
    http://oraclemaps.blogspot.in/2010/01/mapviewer-performance-tuning-tips-part.html
    http://oraclemaps.blogspot.in/2010/01/mapviewer-performance-tuning-tips-part_14.html
    http://oraclemaps.blogspot.in/2010/02/mapviewer-performance-tuning-tips-part.html
    http://oraclemaps.blogspot.in/2010/09/mapviewer-performance-tuning-tips-part.html

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool - the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain analyzed data on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls -primary database containing URL data - record ID (key), URL itself, grouped data, such as number of hits, transfer size, etc)
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups)
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drop down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first it runs fairly fast, until the BDB cache is filled up; then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. In order to deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed was as good as stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance and even though the OS cache was filling up, it was being flushed as well and, eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page size, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling saving the database at the end, took 281 seconds. Interestingly enough, this method called ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports are generated, which use these secondary databases. This improved speed from 4K rec/sec to 14K rec/sec.

    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval were you calling this thread to call memp_trickle?
    This would give me a rough idea about how to tune my app. Would really appreciate it if you can answer these queries.
    Regards,
    Nishith.
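    The trickle-thread technique above can be sketched as a generic periodic-flush worker in modern C++ (an illustrative sketch only; the actual DbEnv::memp_trickle call, which needs a live Berkeley DB environment, appears only as a comment):

    ```cpp
    #include <atomic>
    #include <cassert>
    #include <chrono>
    #include <condition_variable>
    #include <functional>
    #include <mutex>
    #include <thread>

    // Runs a flush callback at a fixed interval until destroyed.  With Berkeley
    // DB the callback would wrap DbEnv::memp_trickle(percent_clean, &wrote),
    // keeping a fraction of the cache clean so writers rarely stall on eviction.
    class TrickleThread {
    public:
        TrickleThread(std::function<void()> flush, std::chrono::milliseconds interval)
            : flush_(std::move(flush)), interval_(interval),
              worker_([this] { run(); }) {}

        ~TrickleThread() {
            {
                std::lock_guard<std::mutex> lock(mu_);
                stop_ = true;
            }
            cv_.notify_one();
            worker_.join();
        }

    private:
        void run() {
            std::unique_lock<std::mutex> lock(mu_);
            while (!stop_) {
                // Wake up either on the interval or when asked to stop.
                if (cv_.wait_for(lock, interval_) == std::cv_status::timeout) {
                    flush_();  // e.g. env->memp_trickle(20, &pages_written);
                }
            }
        }

        std::function<void()> flush_;
        std::chrono::milliseconds interval_;
        std::mutex mu_;
        std::condition_variable cv_;
        bool stop_ = false;
        std::thread worker_;  // declared last so all state is ready when it starts
    };

    int main() {
        std::atomic<int> flushes{0};
        {
            TrickleThread t([&] { ++flushes; }, std::chrono::milliseconds(20));
            std::this_thread::sleep_for(std::chrono::milliseconds(120));
        }  // destructor stops and joins the worker
        assert(flushes >= 2);  // several trickle passes ran in the background
        return 0;
    }
    ```

    The percentage and interval are the two knobs Nishith asks about; both depend on the write rate and cache size, so they are left as parameters here rather than given specific values.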

  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
    The OOTB mapping for this step is a simple SELECT command - which retrieves historical records from the dimension to be updated - and the target table (W_ASSET_D), with no UPDATE strategy. The session is configured to always perform UPDATEs. We have also set $$UPDATE_ALL_HISTORY to "N" in DAC: this way we only select the most recent records from the dimension history, and the only columns that are effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
    The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2,486,000 UPDATEs, we had ~2h of processing - very poor performance for only one ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - is this an expected average execution duration for this number of records?
    - record-by-record updates are not optimal; this could easily be overcome by a BULK COLLECT/FORALL method. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it in DAC?
    Thanks in advance,
    Guilherme

    Hi,
    Thank you for posting in Windows Server Forum.
    Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
    RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
    http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support

  • Is it possible to use external testing tools in SAP XI

    Hi,
    I have been working on file-to-file transfer of large sizes, more than 1 GB. I have many doubts:
    1. Can I use data compression techniques so that my files can be processed (my scenario involves a lot of mapping)?
    2. Can any of the engines be stopped so that processing speed increases?
    3. Can any third-party testing tools, like LoadRunner, be used to monitor the process?
    Please help me with solutions.
    Thanks in advance.

    One GB is a huge file. First of all, check your XI hardware configuration.
    1. You may write your own adapter module to zip the message.
    2. It won't affect the performance much if the file size is 1 GB.  Also, you can't stop the Adapter Engine (File Adapter) or the Integration Engine in your scenario.
    3. You can do the load testing on XI using LoadRunner.
    Regards,
    Prateek
