Poor performance of ReportingService web API ListChildren() method

Hi,
I am facing slowness when fetching the list of files and folders from the report server; see the Fiddler traces below. There are around 1,000 files and folders on the server, and each call takes around 5 seconds. I tried some of the solutions shown below, but with no luck.
I am calling the public CatalogItem[] ListChildren(string ItemPath, bool Recursive) method to get the list of items from the server.
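For reference, this is roughly how the call is made. A minimal sketch, assuming a proxy generated from the SSRS SOAP endpoint (the proxy class name ReportingService2005 and the endpoint URL are assumptions, not taken from my actual code):
ReportingService2005 rs = new ReportingService2005();
rs.Url = "https://abc.com/ReportServer/ReportService2005.asmx"; // assumed endpoint
rs.Credentials = System.Net.CredentialCache.DefaultCredentials;
// Recursive = false returns only the immediate children of the folder;
// a recursive listing over ~1,000 items is considerably more expensive.
CatalogItem[] items = rs.ListChildren("/Pegasus/Reports", false);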
https://abc.com/ReportManager_Perf/Pages/Folder.aspx?ItemPath=%2f&ViewMode=List
ACTUAL PERFORMANCE
ClientConnected:     09:39:39.278
ClientBeginRequest:  09:39:42.340
GotRequestHeaders:   09:39:42.340
ClientDoneRequest:   09:39:42.340
Determine Gateway:   0ms
DNS Lookup:          0ms
TCP/IP Connect:      0ms
HTTPS Handshake:     0ms
ServerConnected:     09:39:39.627
FiddlerBeginRequest: 09:39:42.346
ServerGotRequest:    09:39:42.346
ServerBeginResponse: 09:39:46.547
GotResponseHeaders:  09:39:46.547
ServerDoneResponse:  09:39:47.554
ClientBeginResponse: 09:39:47.554
ClientDoneResponse:  09:39:47.556
Overall Elapsed:  0:00:05.216
https://..../ReportManager_Perf/Pages/Folder.aspx?ItemPath=%2fPegasus%2fReports&ViewMode=List
ACTUAL PERFORMANCE
ClientConnected:     09:39:47.651
ClientBeginRequest:  09:40:02.313
GotRequestHeaders:   09:40:02.313
ClientDoneRequest:   09:40:02.314
Determine Gateway:   0ms
DNS Lookup:          0ms
TCP/IP Connect:      0ms
HTTPS Handshake:     0ms
ServerConnected:     09:39:47.978
FiddlerBeginRequest: 09:40:02.315
ServerGotRequest:    09:40:02.315
ServerBeginResponse: 09:40:06.399
GotResponseHeaders:  09:40:06.399
ServerDoneResponse:  09:40:07.272
ClientBeginResponse: 09:40:07.273
ClientDoneResponse:  09:40:07.274
Overall Elapsed:  0:00:04.960
https://.../ReportManager_Perf/Pages/Folder.aspx?ItemPath=%2fPegasus%2fReports%2fTo+Be+Published&ViewMode=List
ACTUAL PERFORMANCE
ClientConnected:     09:39:47.651
ClientBeginRequest:  09:40:15.896
GotRequestHeaders:   09:40:15.896
ClientDoneRequest:   09:40:15.896
Determine Gateway:   0ms
DNS Lookup:          0ms
TCP/IP Connect:      0ms
HTTPS Handshake:     0ms
ServerConnected:     09:39:47.978
FiddlerBeginRequest: 09:40:15.896
ServerGotRequest:    09:40:15.896
ServerBeginResponse: 09:40:20.378
GotResponseHeaders:  09:40:20.378
ServerDoneResponse:  09:40:23.267
ClientBeginResponse: 09:40:23.281
ClientDoneResponse:  09:40:23.289
Overall Elapsed:  0:00:07.392
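Reading the traces, the connection phases (DNS lookup, TCP/IP connect, HTTPS handshake) are all 0 ms, and almost all of the elapsed time sits between ServerGotRequest and ServerBeginResponse: 09:39:46.547 - 09:39:42.346 = 4.201 s in the first trace, about 4.1 s in the second, and about 4.5 s in the third. So the delay appears to be server-side processing of the folder listing rather than the network.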
I tried to find some workarounds.
Reference:
http://blog.stevienova.com/2009/01/27/sql-server-2005-reporting-services-fix-slow-loading-on-first-report-load/ - this also works for SQL Server 2008.
Design considerations when calling the web service
Consider buffering. By default, the BufferResponse configuration setting is set to true, to ensure that the response is completely buffered before returning to the client. This default is good for small amounts of data. For large amounts of data, consider disabling buffering, as shown in the following code snippet.
[WebMethod(BufferResponse=false)]
public string GetTextFile() {
  // return a large amount of data
}
To determine whether to enable or disable buffering for your application, measure performance with and without buffering.
Consider caching responses. For applications that deal with relatively static data, consider caching responses to avoid accessing the database on every client request. You can use the CacheDuration attribute to specify the number of seconds the response should be cached in server memory, as shown in the following code snippet.
[WebMethod(CacheDuration=60)]
public string GetSomeDetails() {
  // return a large amount of data
}
Note that because caching consumes server memory, it might not be appropriate if your Web method returns large amounts of data or data that changes frequently.
Enable session state only for Web methods that need it.
Session state is disabled by default. If your Web service needs to maintain state, you can set the EnableSession attribute to true for a specific Web method, as shown in the following code snippet.
[WebMethod(EnableSession=true)]
public string GetSomeDetails() {
  // return a large amount of data
}
Note that clients must also maintain an HTTP cookie to identify the state between successive calls to the Web method.
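For what it's worth, a minimal client-side sketch of that cookie requirement for the ListChildren scenario above (the proxy class name ReportingService2005 is an assumption; CookieContainer is the standard property on ASMX-generated proxies):
ReportingService2005 rs = new ReportingService2005();
// Without a CookieContainer the ASMX proxy discards the session cookie,
// so every call would start a new session on the server.
rs.CookieContainer = new System.Net.CookieContainer();
CatalogItem[] rootItems = rs.ListChildren("/", false);                  // session created
CatalogItem[] reportItems = rs.ListChildren("/Pegasus/Reports", false); // session reused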


Similar Messages

  • Bad performance of Java Web Report in BI 7

    We are experiencing poor performance with the Java web application (EP) compared with the ABAP web.
    We migrated our BI reports from the ABAP web interface to the Java web interface as part of our upgrade to the NetWeaver 2004 environment.
    The problem is that it takes much longer to load the BI reports in Java web than in ABAP web.
    It is the same situation in RSRT: Java web needs more time to execute the same query.
    There is no EP logon performance delay, so I think the EP is configured correctly.
    But the response time of a BI Java web report is delayed by an average of 4-5 seconds for every query compared with the ABAP web report.
    Has anyone experienced this situation?

    Hi,
    I am facing similar problem. How did you solve it?
    Regards,
    Apurva

  • Poor performance with WebI and BW hierarchy drill-down...

    Hi
    We are currently implementing a large HR solution with BW as backend and WebI and Xcelsius as frontend. As part of this we are experiencing very poor performance when doing drill-down in WebI on a BW hierarchy.
    In general we see OK performance during selection of data and traditional WebI filtering; however, when using the BW hierarchy for navigation within WebI, response times increase significantly.
    The general solution setup is as follows:
    1) Business Content version of the personnel administration InfoProvider, 0PA_C01. The InfoProvider contains 30,000 records.
    2) MultiProvider to act as semantic Data Mart layer in BW.
    3) BEx query to act as Data Mart query and metadata exchange for BOE. All key figure restrictions and calculations are done in this Data Mart query.
    4) Traditional BO OLAP universe mapped 1:1 to the BEx Data Mart query. No calculations etc. are done in the universe.
    5) WebI report with limited objects included in the WebI query.
    As we are aware that performance is a very subjective issue, we have created several case scenarios with different dataset sizes, various filter criteria and modeling techniques in BW.
    Furthermore we have tried to apply various traditional BW performance tuning techniques, including aggregates, physical partitioning and pre-calculation, all without any luck (pre-calculation doesn't seem to work at all, as WebI apparently isn't using the BW OLAP cache).
    In general the best result we can get is with a completely stripped WebI report without any variables etc. and a total dataset of 1,000 records transferred to WebI. Even in this scenario we can't get each navigational step (when using drill-down on the Organizational Unit hierarchy, 0ORGUNIT) to perform faster than 15-20 seconds per navigational step.
    That is, each navigational step takes 15-20 seconds with only 1,000 records in the WebI cache when using drill-down on the org. unit hierarchy!
    Running the same BEx query from BEx Analyzer with a full dataset of 30,000 records at the lowest level of detail gives 1-2 seconds per navigational step, ruling out that this is a BW modeling issue.
    As our productive scenario obviously involves a far larger dataset, as well as separate data from CATS and PT InfoProviders, we are very worried whether we will ever be able to utilize hierarchy drill-down from WebI.
    The question as such is whether there are any known performance issues related to the use of BW hierarchy drill-down from WebI, and if so, whether there are any ways to get around them.
    As an alternative we are currently considering changing our reporting strategy by creating several more highly aggregated reports to avoid hierarchy navigation altogether. However, we still need to support specific divisions and their need to navigate the WebI dataset without limitations, which makes this issue critical.
    Hope that you are able to help.
    Thanks in advance
    /Frank
    Edited by: Mads Frank on Feb 1, 2010 9:41 PM

    Hi Henry, thank you for your suggestions, although I don't agree with you that 20 seconds is pretty good for that navigation step. The same query executed with BEx Analyzer takes only 1-2 seconds to do the drill-down.
    Actions:
    suppress unassigned nodes in RSH1: Magic!! This was the main problem!!
    tick use structure elements in RSRT: Done.
    enable query stripping in WebI: Done.
    upgrade your BW to SP09: Does SP09 have improvements in relation to this point?
    use more runtime query filters: Not possible; very simple query.
    Others:
    RSRT combination H-1-3-3-1 (Expand nodes/Permanent Cache BLOB).
    Uncheck preliminary hierarchy presentation in the query; only selected.
    Check "Use query drill" in WebI properties.
    Sorry for this mixed message, but while I was answering I tried what you suggested in relation to suppressing unassigned nodes, and it works perfectly. This is what was causing the bottleneck!! Incredible...
    Thanks a lot
    J.Casas

  • Trying to understand the various methods of the Office Web API

    Looking at the Office Web API, I don't see explanations or details of what a specific Office Web API method does. Is one supposed to experiment to discover what they do, or is there some place I need to look for details?
    Thanks
    Jim

    Hi JimBassett,
    What do you mean by "Office Web API"? Do you mean the JavaScript API for Office? If so, you can turn to the link below for explanations and details of the methods.
    #JavaScript API for Office
    https://msdn.microsoft.com/en-us/library/office/fp142185.aspx
    If you mean the Office 365 API, here is an article for your reference:
    #Overview of developing on the Office 365 platform
    https://msdn.microsoft.com/en-us/office/office365/howto/platform-development-overview
    Best Regards,
    Edward

  • Poor performance of web dynpro application

    Hi,
    I have developed a Web Dynpro application which fetches data from an R/3 system using a JCo connection. A large amount of data is transferred between R/3 and the application, because of which it takes too long to display results.
    After adding timestamps before and after the RFC execution code, I found that the RFC execution takes approx. 5 minutes, resulting in poor performance. The time taken for the rest of the processing is negligible. Is there any way I can reduce the time for RFC execution or data transfer?
    Thanks in advance,
    Apurva

    Hi Apurva,
    I think you are displaying all the data at once in the front end, so it takes some time for rendering. Try to reduce the number of display elements (for example, for tables, display only 10 rows at a time).
    regards
    Fahad Hamsa

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool, the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain analyzed data on disk and produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
    BDB is configured in a single file in a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's all after most of the analysis has been done, just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache fills up. Then writing (disk I/O) goes at a snail's pace, at about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. To deal with this problem, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73,557 times while profiling the database save at the end, took 281 seconds (roughly 3.8 ms per put). Interestingly enough, this method called the ReadFile function (Win32) 20,000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up the records being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use these secondary databases are generated. This improved speed from 4K rec/sec to 14K rec/sec.
    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What was the % of clean pages that you specified?
    2. At what interval was your thread calling memp_trickle?
    This would give me a rough idea about how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.
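    As a rough C# illustration of the trickle-thread pattern in point 1 (BDB itself is C/C++, so the flush callback below merely stands in for DbEnv::memp_trickle; the 20% clean-pages target and 5-second interval are placeholder values to tune, not Stone's actual numbers):
    using System;
    using System.Threading;

    class TrickleThread : IDisposable
    {
        private readonly Timer timer;

        // flushToClean stands in for memp_trickle: it should ask the cache
        // manager to bring at least percentClean % of cache pages to a clean
        // state, so page eviction rarely has to write dirty pages itself.
        public TrickleThread(Action<int> flushToClean, int percentClean, TimeSpan interval)
        {
            timer = new Timer(_ => flushToClean(percentClean), null, interval, interval);
        }

        public void Dispose()
        {
            timer.Dispose();
        }
    }

    // Usage (FlushCache is hypothetical):
    // var trickle = new TrickleThread(pct => FlushCache(pct), 20, TimeSpan.FromSeconds(5));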

  • Are multiple database calls really significant with a network call for a web API?

    At one of my employers, we worked on a REST (but it also applies to SOAP) API. The client, which is the application UI, would make calls over the web (LAN in typical production deployments) to the API. The API would make calls to the database.
    One theme that recurs in our discussions is performance: some people on the team believe that you should not have multiple database calls (usually reads) from a single API call because of performance; you should optimize them so that each API call has only (exactly) one database call.
    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (e.g. SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    To be clear, I'm talking about order of magnitude -- I know that it depends on specifics (machine hardware, choice of API and DB, etc.). If I have a call that takes O(milliseconds), does optimizing for DB calls that take an order of magnitude less actually matter? Or is there more to the problem than this?
    Edit: for posterity, I think it's quite ridiculous to claim that we need to improve performance by combining database calls under these circumstances, especially with a lack of profiling. However, it's not my decision whether we do this or not; I want to know what the rationale is behind thinking this is a correct way of optimizing web API calls.

    But is that really important? Consider that the UI has to make a network call to the API; that's pretty big (order of magnitude of milliseconds). Databases are optimized to keep things in memory and execute reads very, very quickly (e.g. SQL Server loads and keeps everything in RAM and consumes almost all your free RAM if it can).
    The Logic
    In theory, you are correct. However, there are a few flaws with this rationale:
    From what you stated, it's unclear if you actually tested / profiled your app. In other words, do you actually know that the network transfers from the app to the API are the slowest component? Because that is intuitive, it is easy to assume that it is. However, when discussing performance, you should never assume. At my employer, I am the performance lead. When I first joined, people kept talking about CDNs, replication, etc. based on intuition about what the bottlenecks must be. It turned out our biggest performance problems were poorly performing database queries.
    You are saying that because databases are good at retrieving data, the database is necessarily running at peak performance, is being used optimally, and there is nothing that can be done to improve it. In other words, databases are designed to be fast, so I should never have to worry about it. Another dangerous line of thinking. That's like saying a car is meant to move quickly, so I don't need to change the oil.
    This way of thinking assumes a single process at a time, or put another way, no concurrency. It assumes that one request cannot influence another request's performance. Resources are shared, such as disk I/O, network bandwidth, connection pools, memory, CPU cycles, etc. Therefore, reducing one database call's use of a shared resource can prevent it from slowing down other requests. When I first joined my current employer, management believed that tuning a 3-second database query was a waste of time. 3 seconds is so little, why waste time on it? Wouldn't we be better off with a CDN or compression or something else? But if I can make a 3-second query run in 1 second, say by adding an index, that is 2/3 less blocking, 2/3 less time spent occupying a thread, and more importantly, less data read from disk, which means less data flushed out of the in-RAM cache.
    The Theory
    There is a common conception that software performance is simply about speed.
    From a purely speed perspective, you are right. A system is only as fast as its slowest component. If you have profiled your code and found that the Internet is the slowest component, then everything else is obviously not the slowest part.
    However, given the above, I hope you can see how resource contention, lack of indexing, poorly written code, etc. can create surprising differences in performance.
    The Assumptions
    One last thing. You mentioned that a database call should be cheap compared to a network call from the app to the API. But you also mentioned that the app and the API servers are in the same LAN. Therefore, aren't both of them comparable as network calls? In other words, why are you assuming that the API transfer is orders of magnitude slower than the database transfer when they both have the same available bandwidth? Of course the protocols and data structures are different, I get that, but I dispute the assumption that they are orders of magnitude different.
    Where it gets murky
    This whole question is about "multiple" versus "single" database calls. But it's unclear how many are multiple. Because of what I said above, as a general rule of thumb, I recommend making as few database calls as necessary. But that is only a rule of thumb.
    Here is why:
    Databases are great at reading data. They are storage engines. However, your business logic lives in your application. If you make a rule that every API call results in exactly one database call, then your business logic may end up in the database. Maybe that is OK. A lot of systems do that. But some don't. It's about flexibility.
    Sometimes, to achieve good decoupling, you want to have 2 database calls separated. For example, perhaps every HTTP request is routed through a generic security filter which validates against the DB that the user has the right access rights. If they do, it proceeds to execute the appropriate function for that URL. That function may interact with the database.
    Calling the database in a loop: this is why I asked how many is multiple. In the example above, you would have 2 database calls. 2 is fine. 3 may be fine. N is not fine. If you call the database in a loop, you have now made performance linear: the request takes longer the more items there are in the loop's input (see the sketch after these points). So categorically saying that the API network time is the slowest completely overlooks anomalies like 1% of your traffic taking a long time due to a not-yet-discovered loop that calls the database 10,000 times.
    Sometimes there are things your app is better at, like some complex calculations. You may need to read some data from the database, do some calculations, then based on the results pass a parameter to a second database call (maybe to write some results). If you combine those into a single call (like a stored procedure) just for the sake of only calling the database once, you have forced yourself to use the database for something the app server might be better at.
    Load balancing: you have 1 database (presumably) and multiple load-balanced application servers. Therefore, the more work the app does and the less the database does, the easier it is to scale, because it's generally easier to add an app server than to set up database replication. Based on the previous point, it may make sense to run a SQL query, then do all the calculations in the application, which is distributed across multiple servers, and then write the results when finished. This could give better throughput (even if the overall transaction time is the same).
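    As a hedged sketch of the loop point above (the Orders table, Amount column and connection handling are hypothetical, not from the original discussion), compare one round trip per item with a single set-based call:
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class OrderTotals
    {
        // Anti-pattern: one database round trip per id, so latency grows
        // linearly with the size of the input.
        public static decimal TotalInLoop(SqlConnection conn, IEnumerable<int> orderIds)
        {
            decimal total = 0;
            foreach (int id in orderIds)
            {
                using (var cmd = new SqlCommand(
                    "SELECT Amount FROM Orders WHERE OrderID = @id", conn))
                {
                    cmd.Parameters.AddWithValue("@id", id);
                    total += (decimal)cmd.ExecuteScalar();
                }
            }
            return total;
        }

        // One round trip regardless of how many rows are involved.
        public static decimal TotalInOneCall(SqlConnection conn)
        {
            using (var cmd = new SqlCommand("SELECT SUM(Amount) FROM Orders", conn))
            {
                return (decimal)cmd.ExecuteScalar();
            }
        }
    }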
    TL;DR
    TLDR: Is it really significant to worry about multiple database calls when we are already making a network call over the LAN? If so, why?
    Yes, but only to a certain extent. You should try to minimize the number of database calls when practical, but don't combine calls which have nothing to do with each other just for the sake of combining them. Also, avoid calling the database in a loop at all costs.

  • Poor performance of new SQL Azure Standard database

    This is not new information. The revised SQL models (basic|standard|premium) perform poorly compared to the earlier web|business databases. We are talking orders of magnitude: over 4 minutes to perform an update vs. 19 seconds (UPDATE Invoice SET SalesOrderID = O.SalesOrderID FROM Invoice INNER JOIN SalesOrder AS O ON Invoice.InvoiceID = O.InvoiceID, for 196,043 rows).
    Microsoft is saying we can only use the web database until September 2015. Moving to the new model (we tried Standard S2) will cause the project to fail.
    There are numerous posts on the Internet identifying this problem. How do we get Microsoft's attention? This is an Azure killer. Fortunately for us, there are a number of other hosting solutions available.
    If this problem is not resolved in the next few months, we will be forced to abandon Microsoft Azure!
    Jim Rand

    Our application is a desktop application that communicates with the web role using a single WCF call. In the server pipeline, a call is made to a method that looks like this, except all the error trapping is removed here for brevity:
    public static Response Process(Request request)
    {
        DateTime startDate = DateTime.UtcNow;
        Agents.Agent agent = Agents.AgentFactory.GetAgent(request);
        Response response = agent.ProcessRequest();
        response.ServiceTime = DateTime.UtcNow - startDate;
        return response;
    }
    While building this application over the last year, we did occasional performance testing, with the Windows client reporting to the server, on logout, the mean service time for the complete session. Quite frankly, I was amazed at the performance. While slightly slower than on the development machine, the performance was acceptable from the user perspective over the Internet.
    Not so anymore. The mean service time on the Azure server has increased dramatically, resulting in timeouts.
    We will be sticking with the Web edition for one more month during development. At that time, we will switch to Premium (P1) for user acceptance testing. It should be interesting to see what the mean, median and standard deviation session server statistics are.
    The performance of SQL Azure Web edition is no longer acceptable. I sure hope Premium (P1) makes it.
    Jim Rand

  • Poor performance on admin console after adding 1k+ teams and member profile

    We run v7sp3p2 (MS) now, but even back on v5.1 we saw degraded performance in the form of response times over 60 seconds when browsing the security hierarchy in the admin console after adding over 1,000 teams and member access profiles. We need this granularity of access for our many users. Does anyone know any tricks to prevent these glacial and disappointing response times while maintaining the necessary security? This behavior reflects poorly on the product's scalability.
    Thanks,
    Erik

    Sorin, I want to make sure I understand your recommendation.
    First, we do have more than 1,000 users. Each location has a unique team to which its users belong, and each of these teams has a member access profile with corresponding read/write access to the dimension member representing that location. The users at each location only view data for their own location.
    Is your recommendation to use another interface besides the admin console for accomplishing security updates?
    We have a custom package that uses an API to upload data files with mass updates to security assignments and definitions, but we hesitate to use this method for mundane add/remove/change operations affecting just a few users, as it bypasses the domain validation we get on the front end, whereby we can only add users to the domain they correctly belong to.
    To dodge the risk of a bad user/domain matchup we'd like to use the front end, but it appears not to support our scale well.
    Thoughts on a setting or configuration we could manipulate to resolve the poor performance would be great: what levers can we pull? If this is all the tool can support, we'll just live with it and pay the cost in wasted man-hours over the life of the product...

  • Apple Maps has received a poor performance rating just after introduction of the iPhone 5. I am running the Google Maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Thanks.

    Apple Maps received a poor performance rating just after the introduction of the iPhone 5. I am running the Google Maps app on the phone. Siri cannot seem to get me to a specific address. Where does the problem lie? Also, can anyone tell me the hierarchy of use between Apple Maps, Siri, and Google Maps when the app is on the phone? How do you choose one over the other as the default for maps? Or better still, how do you stop Siri from using the Apple Maps app when requesting a "go to"?
    I placed an address location into the CONTACTS list, and when I asked Siri to "take me there" it found a TOTALLY different location in the metro area with the same street name. I had included the address, the quadrant (NE) and the ZIP code in the CONTACTS entry. As it turns out, no amount of canceling the trip or relocating the address in the CONTACTS entry would prevent Siri from taking me to this bogus location. FINALLY I typed in Northeast for NE in the CONTACTS entry (NE being the accepted method of defining the USPS location quadrant), canceled the current map route, and it finally found the correct address. This problem would normally not demand such a response from me to have it fixed, but the address is that of a hospital in the center of town, and this hospital HAS a branch location in a similar part of town (NOT the original address Siri was trying to take me to). This screw-up could be dangerous, if not catastrophic, for someone who was looking for a hospital location fast and did not know of these two similar locations. After all, the whole POINT of directions is not just whimsical pastime or convenience. In a pinch people need to rely on this function. OR are my expectations set too high?
    How does the iPhone select between one app or the other (Apple Maps or Google Maps) as it relates to Siri finding and showing a map route?
    Why does Siri return an address that is NOT the correct address, nor in the requested ZIP code?
    Is there a known bug in the CONTACTS list that demands the USPS quadrant ID be spelled out, as opposed to abbreviated, to permit Siri to do its routing?
    Thanks for any clarification on these matters.

    Siri will only use Apple Maps; this cannot be changed. You could try Google voice in the Google app.

  • General poor performance of my Mac mini

    I have been using my Mac mini for over a year, and it continues to frustrate me how slow and unresponsive it can be, even though I do very little with it, i.e. organise photos and browse the web - usually looking for solutions to the poor performance problem!
    I came across this app that has given me lots of information; unfortunately, with my limited IT know-how, there is nothing here that gives me any clues as to the cause of the problem. I guess the red font is not good?
    I'm sorry I can't be more specific about the problem, other than to say that iPhoto and Safari, the two programmes I use way more often than any other, keep giving me the beach ball. My internet speed is OK: 25 Mbps down, 4.5 Mbps up.
    Is there anything here that stands out as being wrong? If so, any suggestions/instructions for what I should do?
    P.S. I had to quit Safari while writing this and have a report for it if that helps.
    Any help much appreciated.
    Problem description:
    My mac is just generally very slow. I only use it to look at the internet and manage photos.
    EtreCheck version: 2.1.5 (108)
    Report generated 3 January 2015 16:03:59 GMT
    Click the [Support] links for help with non-Apple products.
    Click the [Details] links for more information about that line.
    Click the [Adware] links for help removing adware.
    Hardware Information: ℹ️
      Mac mini (Late 2012) (Verified)
      Mac mini - model: Macmini6,2
      1 2.3 GHz Intel Core i7 CPU: 4-core
      4 GB RAM Upgradeable
      BANK 0/DIMM0
      2 GB DDR3 1600 MHz ok
      BANK 1/DIMM0
      2 GB DDR3 1600 MHz ok
      Bluetooth: Good - Handoff/Airdrop2 supported
      Wireless:  en1: 802.11 a/b/g/n
    Video Information: ℹ️
      Intel HD Graphics 4000
      DELL U2412M 1920 x 1200
    System Software: ℹ️
      OS X 10.10.1 (14B25) - Uptime: 5 days 22:14:3
    Disk Information: ℹ️
      APPLE HDD HTS541010A9E662 disk0 : (1 TB)
      EFI (disk0s1) <not mounted> : 210 MB
      Macintosh HD (disk0s2) / : 999.35 GB (759.01 GB free)
      Recovery HD (disk0s3) <not mounted>  [Recovery]: 650 MB
    USB Information: ℹ️
      HP Photosmart B110 series
      Apple Inc. BRCM20702 Hub
      Apple Inc. Bluetooth USB Host Controller
      Apple, Inc. IR Receiver
    Thunderbolt Information: ℹ️
      Apple Inc. thunderbolt_bus
    Gatekeeper: ℹ️
      Mac App Store and identified developers
    Problem System Launch Daemons: ℹ️
      [killed] com.apple.AssetCacheLocatorService.plist
      [killed] com.apple.coreservices.appleid.passwordcheck.plist
      [killed] com.apple.ctkd.plist
      [killed] com.apple.wdhelper.plist
      [killed] com.apple.xpc.smd.plist
      5 processes killed due to memory pressure
    Launch Daemons: ℹ️
      [loaded] com.adobe.fpsaud.plist [Support]
    User Login Items: ℹ️
      iTunesHelper Application (/Applications/iTunes.app/Contents/MacOS/iTunesHelper.app)
      Dropbox Application (/Applications/Dropbox.app)
      Wondershare Helper Compact Application (/Users/[redacted]/Library/Application Support/Helper/Wondershare Helper Compact.app)
    Internet Plug-ins: ℹ️
      Silverlight: Version: 5.1.30514.0 - SDK 10.6 [Support]
      FlashPlayer-10.6: Version: 16.0.0.235 - SDK 10.6 [Support]
      Flash Player: Version: 16.0.0.235 - SDK 10.6 [Support]
      QuickTime Plugin: Version: 7.7.3
      Unity Web Player: Version: UnityPlayer version 4.5.5f1 - SDK 10.6 [Support]
      Default Browser: Version: 600 - SDK 10.10
    3rd Party Preference Panes: ℹ️
      Flash Player  [Support]
    Time Machine: ℹ️
      Auto backup: YES
      Volumes being backed up:
      Macintosh HD: Disk size: 999.35 GB Disk used: 240.33 GB
      Destinations:
      Time Capsule [Network]
      Total size: 2.00 TB
      Total number of backups: 56
      Oldest backup: 2014-09-30 04:29:13 +0000
      Last backup: 2015-01-03 15:37:35 +0000
      Size of backup disk: Adequate
      Backup size 2.00 TB > (Disk used 240.33 GB X 3)
    Top Processes by CPU: ℹ️
        110% com.apple.WebKit.Plugin.64
          6% WindowServer
          1% Activity Monitor
          1% coreaudiod
          1% sysmond
    Top Processes by Memory: ℹ️
      1.11 GB com.apple.WebKit.Plugin.64
      73 MB iTunes
      49 MB com.apple.WebKit.WebContent
      39 MB mds
      39 MB WindowServer
    Virtual Memory Information: ℹ️
      68 MB Free RAM
      1.07 GB Active RAM
      1.02 GB Inactive RAM
      893 MB Wired RAM
      63.93 GB Page-ins
      2.15 GB Page-outs
    Diagnostics Information: ℹ️
      Jan 1, 2015, 11:43:39 AM /Library/Logs/DiagnosticReports/com.apple.WebKit.Plugin.64_2015-01-01-114339_[redacted].cpu_resource.diag [Details]
      Jan 1, 2015, 11:32:46 AM /Library/Logs/DiagnosticReports/WindowServer_2015-01-01-113246_[redacted].crash
      Jan 1, 2015, 11:00:30 AM /Library/Logs/DiagnosticReports/com.apple.WebKit.Plugin.64_2015-01-01-110030_[redacted].cpu_resource.diag [Details]
      Jan 1, 2015, 10:19:03 AM /Library/Logs/DiagnosticReports/iBooks_2015-01-01-101903_[redacted].cpu_resource.diag [Details]

    Hi Linc Davis
    Thank you for taking the time to respond.
    I got this information from a time when I was opening up iPhoto and then going to the iCloud folder. To be honest, it wasn't the most disastrous of events: I did get a few beachballs, and when I opened a shared folder the photos weren't loaded as normal. As I get lengthier delays in carrying out any activities, I will post back the console results.
    Just as a bit more info, I don't consider my problem a Yosemite update problem; if anything, performance has improved since updating from Mavericks.
    Do you have any comments about upgrading RAM from the 4GB I have?
    Should I act on any of the messages in red from the EtreCheck report?
    Many Thanks
    Duncan
    03/01/2015 19:30:18.331 com.apple.iCloudHelper[12481]: objc[12481]: Class FALogging is implemented in both /System/Library/PrivateFrameworks/FamilyCircle.framework/Versions/A/FamilyCircle and /System/Library/PrivateFrameworks/FamilyNotification.framework/Versions/A/FamilyNotification. One of the two will be used. Which one is undefined.
    03/01/2015 19:30:18.391 com.apple.xpc.launchd[1]: (com.apple.imfoundation.IMRemoteURLConnectionAgent) The _DirtyJetsamMemoryLimit key is not available on this platform.
    03/01/2015 19:30:19.162 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.162 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.163 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.163 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.186 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.187 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:30:19.188 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:30:25.446 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 10.70 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:30:33.548 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:30:33.919 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:30:34.077 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:30:34.605 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:30:34.774 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:30:34.853 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:30:47.562 pkd[12123]: FCIsAppAllowedToLaunchExt [343] -- *** _FCMIGAppCanLaunch timed out. Returning false.
    03/01/2015 19:31:10.695 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:10.807 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:16.961 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:31:18.514 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:18.593 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:22.538 discoveryd[49]: Basic Sockets SetDelegatePID() failed for PID[3491] errno[3] result[-1]
    03/01/2015 19:31:27.912 mds[32]: (DiskStore.Normal:2376) 6052001 1.780097
    03/01/2015 19:31:56.831 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:31:57.224 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:57.281 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:57.405 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:31:58.507 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:31:58.541 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:31:59.160 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:09.891 com.apple.InputMethodKit.UserDictionary[12456]: -[PFUbiquitySetupAssistant canReadFromUbiquityRootLocation:](1492): CoreData: Ubiquity:  Error attempting to read ubiquity root url: file:///Users/Duncan/Library/Mobile%20Documents/com~apple~TextInput/Dictionaries/.
    Error: Error Domain=NSCocoaErrorDomain Code=134323 "The operation couldn’t be completed. (Cocoa error 134323.)" UserInfo=0x7f90e152a380 {NSAffectedObjectsErrorKey=<PFUbiquityLocation: 0x7f90e152a2e0>: /Users/Duncan/Library/Mobile Documents/com~apple~TextInput}
    userInfo: {
        NSAffectedObjectsErrorKey = "<PFUbiquityLocation: 0x7f90e152a2e0>: /Users/Duncan/Library/Mobile Documents/com~apple~TextInput";
    03/01/2015 19:32:12.862 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:32:20.818 com.apple.SecurityServer[53]: Killing auth hosts
    03/01/2015 19:32:20.819 com.apple.SecurityServer[53]: Session 100281 destroyed
    03/01/2015 19:32:20.974 com.apple.SecurityServer[53]: Session 100577 created
    03/01/2015 19:32:34.833 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:35.767 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:32:35.789 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:32:35.980 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:36.273 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:32:36.487 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:32:47.035 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
    03/01/2015 19:32:50.555 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 4.52 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:32:51.578 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
    03/01/2015 19:32:52.745 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 2.17 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:32:53.098 mds[32]: (DiskStore.Normal:2376) 6052001 1.279589
    03/01/2015 19:33:03.332 BezelServices 245.23[7816]: ASSERTION FAILED: dvcAddrRef != ((void *)0) -[DriverServices getDeviceAddress:] line: 2602
    03/01/2015 19:33:03.332 BezelServices 245.23[7816]: ASSERTION FAILED: dvcAddrRef != ((void *)0) -[DriverServices getDeviceAddress:] line: 2602
    03/01/2015 19:33:11.961 mds[32]: (DiskStore.Normal:2376) 6052001 1.939804
    03/01/2015 19:33:29.541 WindowServer[7810]: disable_update_timeout: UI updates were forcibly disabled by application "iPhoto" for over 1.00 seconds. Server has re-enabled them.
    03/01/2015 19:33:32.354 WindowServer[7810]: common_reenable_update: UI updates were finally reenabled by application "iPhoto" after 3.81 seconds (server forcibly re-enabled them after 1.00 seconds)
    03/01/2015 19:33:44.549 discoveryd[49]: Basic DNSResolver  dropping message because it doesn't match the one sent Port:53 MsgID:20602
    03/01/2015 19:33:54.055 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:05.143 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.143 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.165 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.166 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:05.499 discoveryd[49]: WCFNameResolvesToAddr: called
    03/01/2015 19:34:05.499 discoveryd[49]: WCFNameResolvesToAddr: entering
    03/01/2015 19:34:07.001 discoveryd[49]: Basic Sockets SetDelegatePID() failed for PID[3491] errno[3] result[-1]
    03/01/2015 19:34:13.809 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:13.821 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:34:14.124 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:34:14.766 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:14.867 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)
    03/01/2015 19:34:17.769 WindowServer[7810]: scrollPhase Ended (4), but previous was 4 (not Began)
    03/01/2015 19:34:18.186 WindowServer[7810]: scrollPhase MayBegin (128), but previous was 1 (not 0, Cancelled or End)
    03/01/2015 19:34:18.186 WindowServer[7810]: scrollPhase Began (1), but previous was 1 (not 0, MayBegin or End)

  • Safari hangs and poor performance in MBPR (Mid 2012)

    Safari hangs and poor performance in MBPR (Mid 2012)? OS X 10.10.2 is up to date

    Please answer as many of the following questions as you can. You may already have answered some of them. In that case, there's no need to repeat the answers.
    Back up all data before making any changes.
    Have you restarted your router and your broadband device (if they're separate) since you first noticed the problem? If not, do that now and see whether there's any change.
    If your browser is Safari, then from the Safari menu bar, select
              Safari ▹ Preferences... ▹ Privacy ▹ Remove All Website Data
    and confirm. If the Downloads button (with the icon of a downward-pointing arrow) is showing in the toolbar, click it and then click Clear in the box that appears. The download history will be removed. Any change?
    If you're running OS X 10.9 or later, select the Advanced tab in the Preferences window and uncheck the box marked
              Stop plug-ins to save power
    Any change?
    Quit and relaunch the browser. Any change?
    Enable guest logins* and log in as Guest. Don't use the Safari-only “Guest User” login created by “Find My Mac.”
    While logged in as Guest, you won’t have access to any of your documents or settings. Applications will behave as if you were running them for the first time. Don’t be alarmed by this behavior; it’s normal. If you need any passwords or other personal data in order to complete the test, memorize, print, or write them down before you begin.
    Test while logged in as Guest. Same problem?
    After testing, log out of the guest account and, in your own account, disable it if you wish. Any files you created in the guest account will be deleted automatically when you log out of it.
    *Note: If you’ve activated “Find My Mac” or FileVault, then you can’t enable the Guest account. The “Guest User” login created by “Find My Mac” is not the same. Create a new account in which to test, and delete it, including its home folder, after testing.
    Are any other web browsers installed, and are they the same? What about other Internet applications, such as iTunes and the App Store?
    If other browsers and Internet applications are also affected, follow these instructions and test. Any change?
    If Parental Controls is active for any user, please turn it off and test. Any change?
    If only Safari is affected, launch the Activity Monitor application and enter "web" (without the quotes) in the search box. If a process named "Safari Web Content" is shown in red or is using more than about 5% of a CPU, select it and force it to quit by clicking the X or Quit Process button in the toolbar of the window. There may be more than one such process. Any improvement?
    Follow the instructions in this support article. Any change?
    Open the iCloud preference pane and uncheck the box marked Photos, if it's checked. Any change?
    Are there any other devices on the same network that can browse the Web, and are they affected?
    If you can test Safari on another network, is it the same there?
    If you connect to your router with Wi-Fi and you can also connect with Ethernet, do that and turn off Wi-Fi. Any difference?

  • Crashing and poor performance during playback of a large project.

    Hi,
    I've been a regular user of iMovie for about 3 years and have edited several 50GB+ projects of DV-quality footage without too many major issues with lag or 'dropped frames'. I currently have an 80GB project that resides on a 95%-full 320GB FireWire 400 external drive and that has been getting very slow to open and near impossible to work with.
    Pair the bursting-at-the-seams external drive with an overburdened 90%-full internal drive, and the poor performance wasn't unexpected. So I bought a 1TB FireWire 400 drive to free up some space on my Mac. My large iTunes library (150GB) was the main culprit and it was quickly moved onto the new drive.
    The iMovie project was then moved into my Mac's Movies folder. I figured that the project needs some "room" to work (not that I really understand how Macs use memory) and that having roughly 80GB free with 1.5GB RAM (which is more than I used to have) would make everything just that much smoother.
    Wrong.
    The project opened in roughly the same amount of time, but when I tried to play back the timeline, it plays like rubbish, and after 10-15 secs the Mac goes into 'sleep' mode. The screen goes off, the fans die down and the 'heartbeat' light goes on. A click of the mouse 'wakes' the Mac, only for the same thing to happen if I try again.
    I've probably got so many variables going on here that it's hard to suggest what the problem might be, but all I could think of was repairing permissions (which I did, and none needed it).
    Stuck on this. Anyone have any advice?

    I understand completely, having worked with a 100 GB project once. I found that a movie bloated up to that size was just more difficult to work with, with jerky playback.
    I do have a couple of suggestions for you.
    You may need more than that 80GB free space for this movie. Is there any reason you cannot move it to the 1TB drive? If you have only your iTunes on it, you should have about 800 GB free.
    If you still need to have the project on your computer's drive, set your computer to never sleep.
    How close to finishing editing are you with this movie? If you are nearly done except for adding audio clips, you can export (share) it as a QuickTime Full Quality movie. The resultant QuickTime version of your iMovie will be smaller because it contains only the clips actually used in the movie, not all the saved whole clips that iMovie keeps as part of its nondestructive editing feature. The QuickTime movie will be one continuous clip, incorporating all your edits and added audio. It CAN be further edited, but you cannot change the text of titles already there, change transitions, or remove already-added audio.
    I actually do this with nearly every iMovie. I create my movies by first importing videos, then adding still photos, then editing with titles, effects and transitions. I add audio last, and if it becomes too distorted in playback, I export the movie and then continue adding audio clips.
    My 100+ GB movie slimmed down to only 8 GB with this method. (The large size was due to having so many clips. The movie was from VHS footage of my son's little league all-star game, and the video had so many skipped segments that I had to split it into thousands of clips to remove the dropped ones. Very old VHS tape!).
    I haven't upgraded to QT 7.5.5, but I heard that the jerky playback issue is mostly resolved with this new upgrade. I am in mid-project with about 5 iMovies, so I will probably plod along with my work-around method, not wanting to upgrade in the middle of any of them.
    Hope this is helpful to you.

  • Poor performance on reports that were migrated from 6i to 10g

    We are migrating from 6i client/server to 10g Reports Server and getting poor performance on various reports. Reports that run in seconds in 6i are taking much longer to run or even timing out.
    Reports Server:
    Version 10.1.2.0.2
    initEngine = 1
    maxEngine = 20
    minEngine = 1
    engLife = 1
    engLife = 1
    maxIdle = 30
    The reports are being called from 10g forms with the following:
    T_repstr := '../reports/rwservlet?server=rep_aporaapp_frhome1'
    || '&report='|| T_prog_name
    || '&userid='|| T_nds_uid
    || '&destype=cache'
    || '&paramform=yes'
    || '&mode=Default'
    || '&desformat=pdf'
    || '&orientation=Landscape';
    web.show_document(T_repstr,'_blank');

    We are using these settings and not hearing many complaints:
    Init Engine 1
    Max Engine 6
    Min Engine 0
    Eng Life 10
    MaxIdle 30
    Trace Error
    Trace Replace
    I set my Report Server Parameters
    CACHE SIZE - 700
    CACHE DIRECTORY = (you have to decide)
    IDLE timeout 120
    Max Connections 120
    Max Queue Size 4000
    trace options = trace_err
    trace mode trace_replace

  • Poor Performance in ETL SCD Load

    Hi gurus,
    We are facing some serious performance problems during an UPDATE step, which is part of an SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
    Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
    Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
    The OOTB mapping for this step is a simple SELECT command, which retrieves the historical records of the dimension to be updated, and the target table (W_ASSET_D), with no Update Strategy; the session is configured to always perform UPDATEs. We have also set $$UPDATE_ALL_HISTORY to "N" in DAC: this way we select only the most recent records from the dimension history, and the only columns that are effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
    The problem is that the UPDATE command is executed by Informatica PowerCenter individually for each record in W_ASSET_D. For 2,486,000 UPDATEs, we had ~2h of processing, which is very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
    Some questions for the above:
    - Is this an expected average execution duration for this number of records?
    - Record-by-record updates are not optimal; this could easily be overcome with a BULK COLLECT/FORALL approach. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it from DAC?
    Thanks in advance,
    Guilherme

    Hi,
    Thank you for posting in Windows Server Forum.
    Initially, please check the configuration and requirements for RemoteFX. You can follow the article below for further research.
    RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
    http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
    Hope it helps!
    Thanks.
    Dharmesh Solanki
    TechNet Community Support
