JEditorPane performance limitations

Hi,
briefly: two of us are developing a multimedia CD in Swing, using JEditorPane
as the viewer.
We are showing a collection of books,
prepared as colored, formatted pages full of HTML links (used for tooltips).
The content is beautiful, but sometimes very slow. Content varies from 1 to 100 pages; at anything bigger than about 10 pages you would call it slow.
The question is:
can we make setText() faster in some magical way, or what are its limits? We have tried a lot of approaches. My only remaining suspicion is that there is a perfect way with setInnerHTML(),
but I tried that too, and it is not much faster.
The methods we tried:
1) serializing the HTMLDocument: very fast, but not good in terms of space and design;
2) using setInnerHTML(): fast, but I cannot be sure there are no better ways;
3) not bringing in the whole content, with some tricks, only the visible amount: scrolling then has a non-smooth nature;
4) loading asynchronously into the JEditorPane: good, but the content comes in slowly, though a partial amount is available at all times;
and much more. I am not sure I am using all the relevant properties.
Thanks to all for any help.
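One more variation on method 4 that may be worth trying: parse the HTML into an HTMLDocument away from the visible pane, then swap it in with setDocument(), which is cheap compared to setText(). A minimal sketch follows; the class and method names are mine, the HTML string is a placeholder, and only the standard HTMLEditorKit API is used:

```java
import java.io.StringReader;

import javax.swing.JEditorPane;
import javax.swing.text.html.HTMLDocument;
import javax.swing.text.html.HTMLEditorKit;

public class FastHtmlSwap {

    /**
     * Parses HTML into a fresh HTMLDocument without touching the visible pane.
     * The expensive part (parsing) can therefore run on a background thread;
     * only the cheap setDocument() swap needs to happen on the EDT.
     */
    static HTMLDocument parseOffscreen(HTMLEditorKit kit, String html) throws Exception {
        HTMLDocument doc = (HTMLDocument) kit.createDefaultDocument();
        doc.putProperty("IgnoreCharsetDirective", Boolean.TRUE);
        kit.read(new StringReader(html), doc, 0);
        return doc;
    }

    public static void main(String[] args) throws Exception {
        HTMLEditorKit kit = new HTMLEditorKit();
        // In the real application this parse would run off the EDT
        // (e.g. in a SwingWorker), with only the swap done on the EDT.
        HTMLDocument doc = parseOffscreen(kit,
                "<html><body><p>a page of the book</p></body></html>");

        JEditorPane pane = new JEditorPane();
        pane.setEditorKit(kit);
        pane.setDocument(doc); // fast: parsing is already done

        System.out.println(doc.getLength() > 0);
    }
}
```

This does not make the parser itself faster, but it keeps the UI responsive while a large page is being parsed.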

Hi, thanks,
I have looked at that, but what we are making is a multimedia CD,
so I have to access the content most of the time.
Each piece of content has an mp3 assigned to it, and I color paragraphs while the mp3 is playing.
The user selects paragraphs or words to take his own notes, can also change the styles on them, and much more.
So my job with the content and the content structure does not end once it is displayed.
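For the paragraph coloring while the mp3 plays, one option that avoids rebuilding the HTML and calling setText() on every position change is to change character attributes on the document already in the pane. A minimal sketch, with names of my own choosing; a JTextPane with a plain StyledDocument stands in for the real HTML content (HTMLDocument extends DefaultStyledDocument, so the same call applies there):

```java
import java.awt.Color;

import javax.swing.JTextPane;
import javax.swing.text.AttributeSet;
import javax.swing.text.SimpleAttributeSet;
import javax.swing.text.StyleConstants;
import javax.swing.text.StyledDocument;

public class ParagraphColor {

    /**
     * Highlights a range by changing character attributes in place,
     * instead of rebuilding the HTML and re-parsing the whole page.
     */
    static void color(StyledDocument doc, int start, int end, Color c) {
        SimpleAttributeSet attrs = new SimpleAttributeSet();
        StyleConstants.setBackground(attrs, c);
        doc.setCharacterAttributes(start, end - start, attrs, false);
    }

    public static void main(String[] args) throws Exception {
        JTextPane pane = new JTextPane(); // stands in for the real viewer
        StyledDocument doc = pane.getStyledDocument();
        doc.insertString(0, "first paragraph of the current mp3", null);

        color(doc, 0, 5, Color.YELLOW); // e.g. the words currently spoken

        AttributeSet a = doc.getCharacterElement(0).getAttributes();
        System.out.println(Color.YELLOW.equals(StyleConstants.getBackground(a)));
    }
}
```

The attribute change only repaints the affected range, so it stays fast even on long pages.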

Similar Messages

  • SQL Server Express Performance Limitations With OGC Methods on Geometry Instances

    I will front load my question.  Specifically, I am wondering if any of the feature restrictions with SQL Server Express cause performance limitations/reductions with OGC methods on geometry instances, e.g., STIntersects?  I have spent time reading
    various documents about the different editions of SQL Server, including the Features Supported by the Editions of SQL Server 2014, but nothing is jumping out at me.  The
    limited information on spatial features in the aforementioned document implies spatial is the same across all editions.  I am hoping this is wrong.
    The situation....  I have roughly 200,000 tax parcels within 175 taxing districts.  As part of a consistency check between what is stored in tax records for the taxing district and what is identified spatially, I set up a basic point-in-polygon query
    to identify the taxing district spatially and then count the number of parcels within each taxing district.  Surprisingly, the query took 66 minutes to run.  As I pointed out, this is being run on a test machine with SQL Server Express.
    Some specifics....  I wrote the query a few different ways and compared the execution plans, and the optimizer always choose the same plan, which is good I guess since it means it is doing its job.  The execution plans show a 'Clustered Index Seek
    (Spatial)' being used and only costing 1%.  Coming in at 75% cost is a Filter, which appears to be connected to the STIntersects predicate.  I brute forced alternate execution plans using HINTS, but they only turned out worse, which I guess is also
    good since it means the optimizer did choose a good plan.  I experimented some with changing the spatial index parameters, but the impact of the options I tried was never that much.  I ended up going with "Geometry Auto Grid" with 16 cells
    per object.
    So, why do I think 66 minutes is excessive?  The reason is that I loaded the same data sets into PostgreSQL/PostGIS, used a default spatial index, and the same query ran in 5 minutes.  Same machine, same data, SQL Server Express is 13x slower than
    PostgreSQL.  That is why I think 66 minutes is excessive.
    Our organization is mostly an Oracle and SQL Server shop.  Since more of my background and experience are with MS databases, I prefer to work with SQL Server.  I really do want to understand what is happening here.  Is there something I can
    do different to get more performance out of SQL Server?  Does spatial run slower on Express versus Standard or Enterprise?  Given I did so little tuning in PostgreSQL, I still can't understand the results I am seeing.
    I may or may not be able to strip the data down enough to be able to send it to someone.

    Tessellating the polygons (tax districts) is the answer!
    Since my use of SQL Server Express was brought up as possibly contributing to the slow runtime, the first thing I did was download an evaluation version of Enterprise Edition.  The runtime on Enterprise Edition dropped from 66 minutes to 57.5 minutes.
     A reduction of 13% isn't anything to scoff at, but total runtime was still 11x longer than in PostgreSQL.  Although Enterprise Edition had 4 cores available to it, it never really spun up more than 1 when executing the query, so it doesn't seem
    to have been parallelizing the query much, if at all.
    You asked about polygon complexity.  Overall, a majority are fairly simple but there are some complex ones with one really complex polygon.  Using the complexity index discussed in the reference thread, the tax districts had an average complexity
    of 4.6 and a median of 2.7.  One polygon had a complexity index of 120, which was skewing the average, as well as increasing the runtime I suspect.  Below is a complexity index breakdown:
    Index     NUM_TAX_DIST
    1         6
    <2        49
    <3        44
    <4        23
    <5        11
    <6        9
    <7        9
    <8        4
    <9        1
    <10       4
    >=10      14
    Before trying tessellation, I tweaked the spatial indexes in several different ways, but the runtimes never changed by more than a minute or two.  I reset the spatial indexes to "geometry auto grid @ 32" and tried out your tessellation functions
    using the default of 5000 vertices.  Total runtime was 2.3 minutes, a 96% reduction and twice as fast as PostgreSQL!  Now that is more like what I was expecting before I started.
    I tried using different thresholds, 3,000 and 10,000 vertices but the runtimes were slightly slower, 3.5 and 3.3 minutes respectively.  A threshold of 5000 definitely seems to be a sweet spot for the dataset I am using.  As the thread you referenced
    discussed, SQL Server spatial functions like STIntersect appear to be sensitive to the number of vertices of polygons.
    After reading your comment, it reminded me of some discussions with Esri staff about ArcGIS doing the same thing in certain circumstances, but I didn't go as far as thinking to apply it here.  So, thanks for the suggestion and code from another post.
     Once I realized the SRID was hard coded to 0 in tvf_QuarterPolygon, I was able to update the code to set it to the same as the input shape, and then everything came together nicely.

  • WebSAPConsole server performance limits

    Hi all:
    I have been looking for some guidelines around server limits for connection and performance in order to determine whether or not some web server load balancing will be required to manage IP and web services availability.
    Max numbers of warehousing users / transactions per web server ie: IIS 6.0 on Win2003.
    Can someone share how many users per server they have efficiently managed?
    50, 100, 500?
    Thanks,
    -alan

    Hi,
    You don't specify which server it is.
    For web as, check
    Re: Increasing Heap Size not possible
    Re: Problem with Java.lang.outofmemoryerror
    max number of server processes
    For portal
    check
    https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/b601a14c-0601-0010-909c-ddc7ae0a1071
    Which are you using?
    Eddy
    PS. Which type of SDN Ubergeek/BPX suit are you?
    Deadline: June 15th

  • E55 HSDPA (3.5G) Modem performance limited by Wind...

    Hello,
    I found out something strange that annoys me a lot. Here it is. You know some E-series devices, such as the E55, support 3.5G and thus speeds of up to 10.2 Mbps. BUT in Windows, connect your phone in PC Suite mode, choose "Connect the Internet" and let the "One-Touch Access" application open, click "connect" and wait until the connection is established.
    Once the connection has been established, in either Vista or XP, check the active connection "status" in "my network connections": you'll see the "speed" as 460800 kbps by default, and you can increase this value in the modem settings up to 921600 kbps, which is what causes the confusion!
    So, why is the communication speed limited to 921600 kbps (at maximum) when the achievable connection speed provided by your 3/3.5G operator is 10.2 Mbps? Is this related to PC Suite's network driver, and what is the explanation for it?
    Really appreciate comments,
    Thanks a lot! 

    About your resolv.conf copy/pasting - /etc/ppp/resolv.conf and ip-up:
    http://bbs.archlinux.org/viewtopic.php?id=62520
    http://bbs.archlinux.org/viewtopic.php?id=62004
    I think it's time for me to add that to the wiki... if it's not already there.
    Though I don't know if it solves your problem in this case, since you said you edited the resolv.conf.
    EDIT: Posted too fast again. I just saw this
    --> primary DNS address 80.251.192.244
    --> secondary DNS address 80.251.192.245
    search bredband.tre.se
    nameserver 80.251.201.177
    nameserver 80.251.201.178
    Same? Purposely put different DNS? /etc/ppp/resolv.conf

  • What are performance limits for HFM 11.1 64-bit?

    Hello All,
    Has anybody tried this, or do you have any Oracle/Hyperion materials about the 64-bit version?
    How much RAM can 64-bit HFM use? (the 32-bit one could only use 2 GB)
    What is the recommended limit on the number of base records in sub-cubes?
    Thank you in advance.

    Hi Mike,
    In version 11.1.2.3.500, the performance of the GUI has decreased compared to previous versions due to the use of ADF. However, there are parameters in the system that you can change to improve the situation. The best approach is to contact an infrastructure consultant who will review the system and edit the OHS/IIS parameters.
    Additionally, there is a rumor that version 11.1.2.4 has solved these issues and hopefully a few more.
    I am sure that you will find more info on EPM System Infrastructure
    Regards,
    Thanos

  • Performance limitations on the CSS - What are they ?

    Hi, we have two CSS11503 and two 11050 boxes which we think could be hitting their limits in how much traffic, concurrent connections etc. they can handle.
    Does anyone have a link to the Cisco figures for what these boxes can handle, and is there a way to get this info from the boxes ?
    cheers,
    Mike

    Mike,
    the drop counter in the flow stats increases when there is no FCM memory available, not when the box is overloaded.
    For your cpu issue, you may check 'sho ip stat' to see what amount of traffic you get.
    There is the possibility of a process going crazy and consuming all CPU, but I do not think you would have both SCM and SAM running very high at the same time.
    You can do a 'cpu hog 1' to see the most recent process. Do it several times.
    If the same process comes again and again that's the one consuming the CPU.
    If it is something like tFlowMgrPktRx it means you are simply receiving a lot of traffic.
    Is the content rule affecting the box an L7 rule? If yes, try to make it L3/4 to see if that helps, just to confirm this is related to traffic.
    Gilles.

  • Performance limitations of SCA components

    With best practice in mind, how "big and complex" can SCA components scale without any major performance impacts?

    Or how many web services should one put in one SCA component? That's a different question. Ideally, one SCA component should expose one web service which represents one business service, with all relevant operations grouped into it.
    Regards,
    Anuj

  • Getting realistic performance expectations.

    I am running tests to see if I can use the Oracle Berkeley XML database as a backend to a web application but am running into query response performance limitations. As per the suggestions for performance related questions, I have pulled together answers to the series of questions that need to be addressed, and they are given below. The basic issue at stake, however, is am I being realistic about what I can expect to achieve with the database?
    Regards
    Geoff Shuetrim
    Oracle Berkeley DB XML database performance.
    Berkeley DB XML Performance Questionnaire
    1. Describe the Performance area that you are measuring? What is the
    current performance? What are your performance goals you hope to
    achieve?
    I am using the database as a back end to a web application that is expected
    to field a large number of concurrent queries.
    The database scale is described below.
    Current performance involves responses to simple queries that involve 1-2
    minute turn around (this improves after a few similar queries have been run,
    presumably because of caching, but not to a point that is acceptable for
    web applications).
    Desired performance is for queries to execute in milliseconds rather than
    minutes.
    2. What Berkeley DB XML Version? Any optional configuration flags
    specified? Are you running with any special patches? Please specify?
    Berkeley DB XML Version: 2.4.16.1
    Configuration flags: enable-java -b 64 prefix=/usr/local/BerkeleyDBXML-2.4.16
    No special patches have been applied.
    3. What Berkeley DB Version? Any optional configuration flags
    specified? Are you running with any special patches? Please Specify.
    Berkeley DB Version? 4.6.21
    Configuration flags: None. The Berkeley DB was built and installed as part of the
    Oracle Berkeley XML database build and installation process.
    No special patches have been applied.
    4. Processor name, speed and chipset?
    Intel Core 2 CPU 6400 @ 2.13 GHz (1066 FSB) (4MB Cache)
    5. Operating System and Version?
    Ubuntu Linux 8.04 (Hardy) with the 2.6.24-23 generic kernel.
    6. Disk Drive Type and speed?
    300 GB 7200RPM hard drive.
    7. File System Type? (such as EXT2, NTFS, Reiser)
    EXT3
    8. Physical Memory Available?
    Memory: 3.8GB DDR2 SDRAM
    9. Are you using Replication (HA) with Berkeley DB XML? If so, please
    describe the network you are using, and the number of Replicas.
    No.
    10. Are you using a Remote Filesystem (NFS) ? If so, for which
    Berkeley DB XML/DB files?
    No.
    11. What type of mutexes do you have configured? Did you specify
    –with-mutex=? Specify what you find in your config.log; search
    for db_cv_mutex.
    I did not specify -with-mutex when building the database.
    config.log indicates:
    db_cv_mutex=POSIX/pthreads/library/x86_64/gcc-assembly
    12. Which API are you using (C++, Java, Perl, PHP, Python, other) ?
    Which compiler and version?
    I am using the Java API.
    I am using the gcc 4.2.4 compiler.
    I am using the g++ 4.2.4 compiler.
    13. If you are using an Application Server or Web Server, please
    provide the name and version?
    I am using the Tomcat 5.5 application server.
    It is not using the Apache Portable Runtime library.
    It is being run using a 64 bit version of the Sun Java 1.5 JRE.
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    I do not have a DB_CONFIG file in the database home directory.
    My environment configuration is as follows:
    Threaded = true
    AllowCreate = true
    InitializeLocking = true
    ErrorStream = System.err
    InitializeCache = true
    Cache Size = 1024 * 1024 * 500
    InitializeLogging = true
    Transactional = false
    TrickleCacheWrite = 20
    15. Please provide your Container Configuration Flags?
    My container configuration is done using the Java API.
    The container creation code is:
    XmlContainerConfig containerConfig = new XmlContainerConfig();
    containerConfig.setStatisticsEnabled(true);
    XmlContainer container = xmlManager.createContainer("container", containerConfig);
    I am guessing that this means that the only flag I have set is the one
    that enables recording of statistics to use in query optimization.
    I have no other container configuration information to provide.
    16. How many XML Containers do you have?
    I have one XML container.
    The container has 2,729,465 documents.
    The container is a node container rather than a wholedoc container.
    Minimum document size is around 1Kb.
    Maximum document size is around 50Kb.
    Average document size is around 2Kb.
    I am using document data as part of the XQueries being run. For
    example, I condition query results upon the values of attributes
    and elements in the stored documents.
    The database has the following indexes:
    xmlIndexSpecification = dataContainer.getIndexSpecification();
    xmlIndexSpecification.replaceDefaultIndex("node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"fragment","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"data","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPINamespace,"xptr","node-element-presence");
    xmlIndexSpecification.addIndex("","stub","node-attribute-presence");
    xmlIndexSpecification.addIndex("","index", "unique-node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRL21LinkNamespace,"label","node-element-substring-string");
    xmlIndexSpecification.addIndex(Constants.GenericLabelNamespace,"label","node-element-substring-string");
    xmlIndexSpecification.addIndex("","name","node-attribute-substring-string");
    xmlIndexSpecification.addIndex("","parentIndex", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","uri", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","type", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetDocumentURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetPointerValue", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","absoluteHref", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","id","node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","arcroleURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","roleURI", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","name", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","targetNamespace", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","contextRef", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","unitRef", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","scheme", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex("","value", "node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRL21Namespace,"identifier", "node-element-equality-string");           
    xmlIndexSpecification.addIndex(Constants.XMLNamespace,"lang","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"from","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"to","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"type","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"arcrole","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"role","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XLinkNamespace,"label","node-attribute-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"language","node-element-presence");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"code","node-element-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"value","node-element-equality-string");
    xmlIndexSpecification.addIndex(Constants.XBRLAPILanguagesNamespace,"encoding","node-element-equality-string");
    17. Please describe the shape of one of your typical documents? Please
    do this by sending us a skeleton XML document.
    The following provides the basic information about the shape of all documents
    in the data store.
    <ns:fragment xmlns:ns="..." attrs...(about 20 of them)>
      <ns:data>
        Single element that varies from document to document but that
        is rarely more than a few small elements in size and (in some cases)
        a lengthy section of string content for the single element.
      </ns:data>
    </ns:fragment>
    18. What is the rate of document insertion/update required or
    expected? Are you doing partial node updates (via XmlModify) or
    replacing the document?
    Document insertion rates are not a first order performance criteria.
    I do no document modifications using XmlModify.
    When doing updates I replace the original document.
    19. What is the query rate required/expected?
    Not sure how to provide metrics for this, but a single web page being
    generated can involve hundreds of queries, each of which
    should be trivial to execute given the indexing strategy in use.
    20. XQuery -- supply some sample queries
    1. Please provide the Query Plan
    2. Are you using DBXML_INDEX_NODES?
              I am using DBXML_INDEX_NODES by default because I
              am using a node container rather than a whole document
              container.
    3. Display the indices you have defined for the specific query.
    4. If this is a large query, please consider sending a smaller
    query (and query plan) that demonstrates the problem.
    Example queries.
    1. collection('browser')/*[@parentIndex='none']
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    2. collection('browser')/*[@stub]
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <PresenceQP container="browser" index="node-attribute-presence-none" operation="eq" child="stub"/>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    3. qplan "collection('browser')/*[@type='org.xbrlapi.impl.ConceptImpl' or @parentIndex='asdfv_3']"
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
            <UnionQP>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type" value="org.xbrlapi.impl.ConceptImpl"/>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="asdfv_3"/>
            </UnionQP>
          </StepQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    4.
    setnamespace xlink http://www.w3.org/1999/xlink
    qplan "collection('browser')/*[@uri='http://www.xbrlapi.org/my/uri' and */*[@xlink:type='resource' and @xlink:label='description']]"
    <XQuery>
      <QueryPlanToAST>
        <LevelFilterQP>
          <NodePredicateFilterQP uri="" name="#tmp8">
            <StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
              <StepQP axis="parent-of-child" uri="*" name="*" nodeType="element">
                <NodePredicateFilterQP uri="" name="#tmp1">
                  <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
                    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="label:http://www.w3.org/1999/xlink"
                    value="description"/>
                  </StepQP>
                  <AttributeJoinQP>
                    <VariableQP name="#tmp1"/>
                    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="type:http://www.w3.org/1999/xlink"
                    value="resource"/>
                  </AttributeJoinQP>
                </NodePredicateFilterQP>
              </StepQP>
            </StepQP>
            <AttributeJoinQP>
              <VariableQP name="#tmp8"/>
              <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="uri" value="http://www.xbrlapi.org/my/uri"/>
            </AttributeJoinQP>
          </NodePredicateFilterQP>
        </LevelFilterQP>
      </QueryPlanToAST>
    </XQuery>
    21. Are you running with Transactions? If so, please provide any
    transactions flags you specify with any API calls.
    I am not running with transactions.
    22. If your application is transactional, are your log files stored on
    the same disk as your containers/databases?
    The log files are stored on the same disk as the container.
    23. Do you use AUTO_COMMIT?
    Yes. I think that it is a default feature of the DocumentConfig that
    I am using.
    24. Please list any non-transactional operations performed?
    I do document insertions and I do XQueries.
    25. How many threads of control are running? How many threads in read
    only mode? How many threads are updating?
    One thread is updating. Right now one thread is running queries. I am
    not yet testing the web application with concurrent users given the
    performance issues faced with a single user.
    26. Please include a paragraph describing the performance measurements
    you have made. Please specifically list any Berkeley DB operations
    where the performance is currently insufficient.
    I have loaded approximately 7 GB data into the container and then tried
    to run the web application using that data. This involves running a broad
    range of very simple queries, all of which are expected to be supported
    by indexes to ensure that they do not require XML document traversal activity.
    Querying performance is insufficient, with even the most basic queries
    taking several minutes to complete.
    27. What performance level do you hope to achieve?
    I hope to be able to run a web application that simultaneously handles
    page requests from hundreds of users, each of which involves a large
    number of database queries.
    28. Please send us the output of the following db_stat utility commands
    after your application has been running under "normal" load for some
    period of time:
    % db_stat -h database environment -c
    1038     Last allocated locker ID
    0x7fffffff     Current maximum unused locker ID
    9     Number of lock modes
    1000     Maximum number of locks possible
    1000     Maximum number of lockers possible
    1000     Maximum number of lock objects possible
    155     Number of current locks
    157     Maximum number of locks at any one time
    200     Number of current lockers
    200     Maximum number of lockers at any one time
    13     Number of current lock objects
    17     Maximum number of lock objects at any one time
    1566M     Total number of locks requested (1566626558)
    1566M     Total number of locks released (1566626403)
    0     Total number of locks upgraded
    852     Total number of locks downgraded
    3     Lock requests not available due to conflicts, for which we waited
    0     Lock requests not available due to conflicts, for which we did not wait
    0     Number of deadlocks
    0     Lock timeout value
    0     Number of locks that have timed out
    0     Transaction timeout value
    0     Number of transactions that have timed out
    712KB     The size of the lock region
    21807     The number of region locks that required waiting (0%)
    % db_stat -h database environment -l
    0x40988     Log magic number
    13     Log version number
    31KB 256B     Log record cache size
    0     Log file mode
    10Mb     Current log file size
    0     Records entered into the log
    28B     Log bytes written
    28B     Log bytes written since last checkpoint
    1     Total log file I/O writes
    0     Total log file I/O writes due to overflow
    1     Total log file flushes
    0     Total log file I/O reads
    1     Current log file number
    28     Current log file offset
    1     On-disk log file number
    28     On-disk log file offset
    1     Maximum commits in a log flush
    0     Minimum commits in a log flush
    96KB     Log region size
    0     The number of region locks that required waiting (0%)
    % db_stat -h database environment -m
    500MB     Total cache size
    1     Number of caches
    1     Maximum number of caches
    500MB     Pool individual cache size
    0     Maximum memory-mapped file size
    0     Maximum open file descriptors
    0     Maximum sequential buffer writes
    0     Sleep after writing maximum sequential buffers
    0     Requested pages mapped into the process' address space
    1749M     Requested pages found in the cache (99%)
    722001     Requested pages not found in the cache
    911092     Pages created in the cache
    722000     Pages read into the cache
    4175142     Pages written from the cache to the backing file
    1550811     Clean pages forced from the cache
    19568     Dirty pages forced from the cache
    3     Dirty pages written by trickle-sync thread
    62571     Current total page count
    62571     Current clean page count
    0     Current dirty page count
    65537     Number of hash buckets used for page location
    1751M     Total number of times hash chains searched for a page (1751388600)
    8     The longest hash chain searched for a page
    3126M     Total number of hash chain entries checked for page (3126038333)
    4535     The number of hash bucket locks that required waiting (0%)
    278     The maximum number of times any hash bucket lock was waited for (0%)
    1     The number of region locks that required waiting (0%)
    0     The number of buffers frozen
    0     The number of buffers thawed
    0     The number of frozen buffers freed
    1633189     The number of page allocations
    4301013     The number of hash buckets examined during allocations
    259     The maximum number of hash buckets examined for an allocation
    1570522     The number of pages examined during allocations
    1     The max number of pages examined for an allocation
    184     Threads waited on page I/O
    Pool File: browser
    8192     Page size
    0     Requested pages mapped into the process' address space
    1749M     Requested pages found in the cache (99%)
    722001     Requested pages not found in the cache
    911092     Pages created in the cache
    722000     Pages read into the cache
    4175142     Pages written from the cache to the backing file
    % db_stat -h database environment -r
    Not applicable.
    % db_stat -h database environment -t
    Not applicable.
    vmstat
    r b swpd free buff cache si so bi bo in cs us sy id wa
    1 4 40332 773112 27196 1448196 0 0 173 239 64 1365 19 4 72 5
    iostat
    Linux 2.6.24-23-generic (dell)      06/02/09
    avg-cpu: %user %nice %system %iowait %steal %idle
    18.37 0.01 3.75 5.67 0.00 72.20
    Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
    sda 72.77 794.79 1048.35 5376284 7091504
    29. Are there any other significant applications running on this
    system? Are you using Berkeley DB outside of Berkeley DB XML?
    Please describe the application?
    No other significant applications are running on the system.
    I am not using Berkeley DB outside of Berkeley DB XML.
    The application is a web application that organises the data in
    the stored documents into hypercubes that users can slice/dice and analyse.
    Edited by: Geoff Shuetrim on Feb 7, 2009 2:23 PM to correct the appearance of the query plans.

    Hi Geoff,
    Thanks for filling out the performance questionnaire. Unfortunately the forum software seems to have destroyed some of your queries - you might want to use \[code\] and \[/code\] tags to mark up your queries and query plans next time.
    Geoff Shuetrim wrote:
    Current performance involves responses to simple queries that involve 1-2
    minute turn around (this improves after a few similar queries have been run,
    presumably because of caching, but not to a point that is acceptable for
    web applications).
    Desired performance is for queries to execute in milliseconds rather than
    minutes.
    I think that this is a reasonable expectation in most cases.
    14. Please provide your exact Environment Configuration Flags (include
    anything specified in you DB_CONFIG file)
    I do not have a DB_CONFIG file in the database home directory.
    My environment configuration is as follows:
    Threaded = true
    AllowCreate = true
    InitializeLocking = true
    ErrorStream = System.err
    InitializeCache = true
    Cache Size = 1024 * 1024 * 500
    InitializeLogging = true
    Transactional = false
    TrickleCacheWrite = 20
    If you are performing concurrent reads and writes, you need to enable transactions in both the environment and the container.
    Example queries.
    1. collection('browser')/*[@parentIndex='none']
    <XQuery>
    <QueryPlanToAST>
    <LevelFilterQP>
    <StepQP axis="parent-of-attribute" uri="*" name="*" nodeType="element">
    <ValueQP container="browser" index="node-attribute-equality-string" operation="eq" child="parentIndex" value="none"/>
    </StepQP>
    </LevelFilterQP>
    </QueryPlanToAST>
    </XQuery>
    I have three initial observations about this query:
    1) It looks like it could return a lot of results - a query that returns a lot of results will always be slow. If you only want a subset of the results, use lazy evaluation, or put an explicit call to the subsequence() function in the query.
    2) An explicit element name with an index on it often performs faster than a "*" step. I think you'll get faster query execution if you specify the document element name rather than "*", and then add a "node-element-presence" index on it.
    3) Generally descendant axis is faster than child axis. If you just need the document rather than the document (root) element, you might find that this query is a little faster (any document with a "parentIndex" attribute whose value is "none"):
    collection()[descendant::*/@parentIndex='none']
    Similar observations apply to the other queries you posted.
    Get back to me if you're still having problems with specific queries.
    John

  • Slow performance Storage pool.

    We also encounter performance problems with storage pools.
    The RC is somewhat faster than the CP version.
    Hardware: Intel S1200BT (test) motherboard with LSI 9200-8e SAS 6Gb/s HBA connected to 12 ST91000640SS disks. Heavy problems with “bursts”.
    Using the ARC 1320IX-16 HBA card is somewhat faster and looks more stable (fewer bursts).
    Inserting an ARC 1882X RAID card increases speed by a factor of 5 – 10.
    Hence hardware RAID on the same hardware is 5 – 10 times faster!
    We noticed that the Resource Monitor becomes unstable (unresponsive) while testing.
    There are no heavy processor loads while testing.
    JanV.

    Based on some testing, I have several new pieces of information on this issue.
    1. Performance limited by controller configuration.
    First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks
    that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass-through the entire physical disk to SS. But for some reason, occasionally
    (but not always) the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.
    So given my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, on the prior configuration the performance of virtual disks was limited by some of the underlying disks having this poor performance. This may
    also have caused the unresponsiveness of the entire machine, if the Legacy mode operation had interrupt problems. So the first lesson is - check the entire physical disk stack, under the configuration you are using for SS, first.
    My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, and so the performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups to be presented to the OS (for
    16 disks across two controllers), and (b) loss of a little capacity to RAIDCore structures.
    However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.
    Unfortunately, this goes against advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx,
    which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled.". But it seems necessary for performance, at least on RAIDCore controllers.
    2. SS/Virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:
    (a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks. But it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14
    disks with a rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Then, and unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different
    sized disks is much higher with multiples of 8 disks in the pool.
    I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.
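    The 8-column parity layout described in (a) can be turned into a quick capacity estimate. This is only a sketch: it assumes single parity (1 parity slab per 7 data slabs, as stated above) and ignores pool metadata and unequal disk sizes.

```python
# Usable-capacity sketch for a Storage Spaces parity virtual disk with
# 8 columns: every stripe holds 7 data slabs plus 1 parity slab,
# regardless of how many disks are in the pool.
COLUMNS = 8

def usable_tb(pool_tb: float, columns: int = COLUMNS) -> float:
    """Approximate data capacity of a single-parity virtual disk."""
    return pool_tb * (columns - 1) / columns

print(usable_tb(16.0))  # 14.0 -> a 16 TB pool yields ~14 TB of data
```

    The same 7/8 fraction applies whether the pool has 8 disks or 14, which is why adding disks beyond a multiple of 8 improves capacity flexibility but not the parity overhead.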
    (b) Use disks as similar sized as possible in the SS pool.
    This is about the efficiency of being able to use all the space available. SS can use different sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns - if there are different sized disks and there
    are not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution to this has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal
    sized) or JBOD (for different sized) before presenting them to SS. 
    It would be better if SS could do this itself rather than needing a RAID controller to do this. e.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6
    disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).
    (c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each.
    On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks, this limit is
    still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance to 980MB/sec read and 110MB/sec write.
    It is a shame that SS does not parallelize the virtual disk access across multiple 8-column groups that are on different physical disks, and that you need work around this by striping virtual disks together. Effectively you are creating a RAID50 - a Windows
    RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem like any advantage not to do this, as with the 8 column limit SS is using 2/16 of the available disk space for parity anyhow.
    You may pay a space efficiency penalty if you have unequal sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column
    width (like ZFS) on a single virtual disk, to fully be able to use the space at the end of the disks.
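    The doubling reported in (c) is consistent with a simple linear-scaling model of striping, sketched here using the measured per-pool figures (assumed to hold constant for each 8-disk pool):

```python
# Linear-scaling model for striping parity virtual disks together in
# Disk Management, using the per-pool throughput measured above.
READ_PER_POOL_MB_S = 490   # measured read ceiling per 8-column parity disk
WRITE_PER_POOL_MB_S = 55   # measured write ceiling per pool

def striped_throughput(pools: int) -> tuple[int, int]:
    """(read, write) in MB/sec when striping one virtual disk per pool."""
    return pools * READ_PER_POOL_MB_S, pools * WRITE_PER_POOL_MB_S

print(striped_throughput(2))  # (980, 110), the doubling reported above
```

    Whether the scaling stays linear beyond two pools depends on controller and bus limits, which this sketch does not model.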
    (d) Journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to
    54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more advantages under random writes. I am primarily interested in sequential read and write performance.
    Also, the journal only seems to be used if it in on the pool before the virtual disk is created. It doesn't seem that journal disks are used for existing virtual disks if added to the pool after the virtual disk is created.
    Final configuration
    For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore
    card to get as close as possible to 1TB disks, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.
    So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note - for some reason, this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec write (better
    than 55MB/sec on a 8-disk group). On space efficiency, I'm able to use all but 29GB in the pool in a single 14,266GB parity virtual disk.
    I hope these results are interesting and helpful to others!

  • Smartview Version 11.1.2 - Performance issue

    I have installed Smartview on a couple of user machines and everyone is having the same issue. We are using Smartview for Hyperion Planning.
    Installation was fine and I can connect to the Hyperion Planning application, but after installation users are now having a lot of issues using Excel.
    It's taking too long when they are switching between sheets, too long to open Excel, and it's doing all the calculations on sheets whenever a user clicks on a sheet.
    Does this have something to do with the memory on their machines, or is something else causing this issue?
    Please help me as this is a major issue affecting a lot of users.
    Thanks

    Hi HypUser99,
    Unless I'm going down the wrong path here...
    The issue you are running into is a performance limitation of Excel, likely not impacted by the lack of another Gigabyte of available memory.
    If the laptop policies prevent your users from saving this setting, I encourage you to take that up with your admins. IMHO setting a policy that prevents disabling autocalc is utterly useless and by virtue of common sense, must be a mistake.
    If this option is not available to you, then consider designing Smart View sheets with fewer calculated cells.
    Regards,
    Robb Salzmann

  • Improve Performance with QaaWS with multiple RefreshButtons??

    Hi,
    I read that a connection opens a maximum of 2 QaaWS calls at a time. I want to improve performance.
    Currently I refresh 6 connections with one button. Would it improve performance if I split this one button with 6 connections into 3 buttons with 2 connections each?
    Thanks,
    BWBW

    Hi
    HTTP 1.1 limits the number of concurrent HTTP requests to a maximum of two, so your dashboard will only be able to send & receive a maximum of 2 requests simultaneously; a third will stand by until one of the first two is handled.
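    The queuing behaviour described above can be sketched with a counting semaphore. This is a conceptual model only; `qaaws_call` is a hypothetical stand-in for the real QaaWS web-service request, not an actual API.

```python
import threading

# Sketch of the per-host limit described above: HTTP 1.1 clients allow
# at most two concurrent requests, so additional QaaWS calls wait for
# a free slot rather than running in parallel.
MAX_CONCURRENT = 2
slots = threading.Semaphore(MAX_CONCURRENT)
completed = []

def qaaws_call(name: str) -> None:
    with slots:              # blocks while two calls are already in flight
        completed.append(name)

threads = [threading.Thread(target=qaaws_call, args=(f"query{i}",))
           for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(completed))  # all 6 calls finish, but never more than 2 at once
```

    This is why splitting one button with 6 connections into 3 buttons changes little: the two-slot limit is per host, not per button.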
    QaaWS performance is mostly affected by database performance, so if you plan to move to LO to improve performance, I'd recommend you use LO from WebI parts, as if you use LO to consume a universe query, you will experience similar performance limitations.
    If you actually want to consume WebI report parts, and need report filters, you can also consider XI 3.1 SP2 BI Services, where performance is better than QaaWS, and interactions are also easier to implement.
    Hope that helps,
    David.

  • Performance/costs improvements from single SQL database to elastic scale

    Hi there!
    I already posted a previous question on whats the best way to setup a elastic scale architecture for our application(https://social.msdn.microsoft.com/Forums/azure/en-US/82fabac7-137b-46d6-a9f0-5e71e4bbc9eb/using-datadependent-routing-in-combination-with-membership-provider?forum=ssdsgetstarted).
    The next thing we need to know before we can implement elastic scale is whether this is really going to help us with our performance and costs, where we are experiencing problems at the moment.
    Currently we have a single SQL database(p3 800 DTU's).
    We have run queries against our database which can calculate the number of DTU's we need to get good performance for that database.
    My questions are:
    1.When we implement elastic scale, can we really improve our performance and can this lead to saving costs on SQL databases?
    2. Is there maybe a way we can easily test this by running queries or setting up an environment  where we can see real differences in DTU's per Shard?
    Thanks!

    Hi Elmar,
    If you're already hitting performance limits with your P3 database, then other than upgrading to the new V12 server for improved premium performance, that is as high as you can currently scale vertically. Thus, it becomes advantageous to scale out to achieve better performance. 
    A small caveat, my answers below are really contingent upon your workload, query patterns, and sharding scheme.
    1.When we implement elastic scale, can we really improve our performance and can this lead to saving costs on SQL databases?
    Absolutely.  If you look at the table below, you can see the ~Cost per DTU/month. For a P3 you are paying $4.65/DTU/month.  If you were to scale out, with the same number of DTUs on an S2, for example, you'd achieve a 67% savings on
    cost for the same number of DTUs (800).  Please keep in mind that there are feature differences between Standard and Premium SKUs as well as total size (250GB vs 500GB) - these may or may not affect your application.
    **Please note that S3 above is the preview price (50% off).
    2. Is there maybe a way we can easily test this by running queries or setting up an environment  where we can see real differences in DTU's per Shard?
    The available DTUs per shard is a constant value.  In principle, 800 DTUs on one P3 database is equivalent in performance capacity to eight P1s at 100 DTUs each.  The test you want to perform is a comparison between your scale-up
    solution and a scale-out solution, as perceived by both the database %DTU consumed and the response time/throughput of your client application. 
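    The arithmetic behind the reply can be sketched as follows. The per-DTU price and DTU counts are the figures quoted above, which reflect 2014-era SKUs and should be re-checked against current pricing.

```python
# Back-of-envelope numbers from the reply above. Assumed figures:
# P3 = 800 DTUs at ~$4.65 per DTU per month, P1 = 100 DTUs.
P3_DTUS = 800
P1_DTUS = 100
COST_PER_DTU_P3 = 4.65

shards_matching_p3 = P3_DTUS // P1_DTUS      # eight P1 shards match one P3
monthly_p3_cost = P3_DTUS * COST_PER_DTU_P3  # dollars per month on a P3

print(shards_matching_p3, monthly_p3_cost)   # 8 3720.0
```

    The same DTU total on cheaper Standard-tier shards is where the quoted ~67% saving comes from, subject to the feature and size differences the reply notes.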

  • IDSM-2 Performance

    IDSM-2 gives 500Mbps in IPS mode and 600Mbps in IDS mode. Bundling 4 IDSM-2s in a single chassis gives 2Gbps performance with Sup 32. But the FWSM provides 5Gbps throughput and the Sup 720 supports 40Gbps switching. What is the disconnect here? How do you design your IDSM-2s to support 5Gbps throughput when you have a single FWSM supporting 5Gbps?

    If you exceed the monitoring capability of the sensor, then packets that can not be monitored will be dropped by the sensor.
    NOTE: 500Mbps is not an absolute performance number for the sensor. It is a performance level that the sensor has been tested to be able to handle for specific types of traffic used in the performance test. It is unknown exactly how much traffic the sensor will be able to handle for your network. The IDSM-2 will likely handle AROUND 500 Mbps in many and even most customer networks. However, networks do vary and in some networks it may handle quite a bit less traffic, and in other networks might handle even more.
    So the question isn't what will happen if you send more than 500 Mbps, but rather what will happen if you send more of your traffic than what the sensor is able to monitor. And the answer is that any traffic that can not be monitored because of performance limitations will be dropped by the sensor.
    The only time packets are forwarded without inspection is if sensorApp has stopped monitoring ALL packets (either a reconfiguration or upgrade is taking place, or the sensorApp process has crashed) AND the auto software bypass functionality has kicked in. In which case ALL packets would be forwarded without analysis.
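    The capacity gap raised in the question can be quantified from the rated figures, keeping in mind the reply's caveat that 500 Mbps is a tested level, not an absolute ceiling:

```python
import math

# Capacity gap from the figures in this thread: one IDSM-2 inspects
# roughly 500 Mbps in IPS mode, while the FWSM passes 5 Gbps.
IDSM2_IPS_MBPS = 500
FWSM_MBPS = 5000

blades_to_match_fwsm = math.ceil(FWSM_MBPS / IDSM2_IPS_MBPS)
bundle_of_four_mbps = 4 * IDSM2_IPS_MBPS

print(blades_to_match_fwsm)  # 10 blades' worth of inspection capacity
print(bundle_of_four_mbps)   # 2000 Mbps (2 Gbps), as the question notes
```

    Any traffic beyond the bundle's aggregate inspection capacity is dropped by the sensor, as the reply explains.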

  • Performance of WAN Port

    Hi there,
    I ordered a 120 Mbit/s line from my provider. After some speed issues I had an online session with the service provider today. I was getting 80-90 Mbit/s ....
    He advised me to take out the AirPort Extreme between the cable modem and my Mac mini. At first I did not think this would bring any improvement, since the modem is Gigabit Ethernet and so are the Mac mini and the AirPort Extreme....
    As he insisted, I took it out and what happened? Internet performance went up from 80-90 to 110-115 Mbit/s.
    So there is a real cut in performance - is this a general performance limitation (firewall/NAT or whatever is keeping the AirPort busy and not letting through more than 100 Mbit), or is the WAN port only 100Mbit?
    Is there anything to tune?
    Oh, by the way, the Extreme is 2 weeks old - so it is the latest model (MC).
    Thanks in advance!

    I've owned several routers from different manufacturers and have worked with several Internet service providers and have found that the routing process will cause, on average, a 15-20% drop in speed when compared to a straight through connection from the modem to a computer. Sometimes, its a bit more loss and sometimes a bit less.
    There is no doubt that the the NAT firewall accounts for some of this loss in the routing process and the process of sharing a single connection from the modem is likely to cause some loss as well.
    If I had a 100 Mbps service from the provider, I would expect on average to see speeds in the 80-85 Mbps range using an Ethernet connection through the router. The Internet connection speed often varies on most providers during the day depending on the number of users on the system.
    If you are talking about wireless speed, there is more speed loss here through the router due to the security encryption process.
    Now, the results of other users may vary. Hopefully, they will post to give us more perspective. Based on the different routers I've used with different providers, I would say that your results are in the range that I would expect.
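    For comparison with the 15-20% average quoted in the reply, the original poster's own numbers work out slightly higher:

```python
# Router overhead using the midpoints of the speeds reported above:
# 80-90 Mbit/s through the AirPort Extreme vs 110-115 Mbit/s direct.
routed = (80 + 90) / 2     # 85.0 Mbit/s
direct = (110 + 115) / 2   # 112.5 Mbit/s

loss_pct = (direct - routed) / direct * 100
print(f"{loss_pct:.1f}% slower through the router")  # 24.4%
```

    That is somewhat above the typical 15-20% routing loss, but within the "sometimes a bit more" range the reply allows for.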

  • Limited user license

    Hi! All,
    Can anyone tell me the functionalities covered under the limited user license? Specifically, I want to know if a limited user license can be used for the release/approval process.
    Thank you for your help.
    SM

    Hi,
    As per my understanding, limited professional users are upper management people who do not carry out transactions in the system. They only use reports to check out data.
    Here is what SAP documentation says:
    <b>
    A mySAP Limited Professional (Cat. III) user is a named user who performs limited operational related roles supported by the software. In particular employees of business third parties are to be licensed as mySAP Limited Professional users.
    Those employees of business third parties that are performing functions or roles normally performed by the customer's employees, for example independent contractors, consultants or temporary employees need to be licensed as Professional Users.
    The mySAP Limited Professional User license includes the rights granted under a mySAP Employee User license.
    </b>
    So if you are talking about line managers who approve/reject e.g. invoices, sales orders, purchase orders, etc., I think you are going to need a professional user license. <b>I am open to correction though</b>
    Please check with your SAP Account Manager to confirm this.
    Regards
