interMedia Text search poor performance

We have a serious problem with our application running on Oracle 8i (8.1.5) with interMedia Text. After analyzing the data tables, the execution plans are OK and a query (a CONTAINS query combined with a relational query) returns after 5 seconds.
When we insert a few records (about 5), the same query becomes extremely slow (it returns after 10-30 minutes). The optimizer no longer uses the domain index and initiates a full table scan (200,000 rows of long text data). When the table is analyzed again, the execution plan is OK.
The relational part of the query selects on a number value (a timestamp), and we suspect the cause is that the values of the inserted rows are higher than the maximum value recorded at analyze time.
Oracle Support in Germany could not help us. Does anybody know how the problem can be resolved, or does anybody have the same problem? We must find a solution, because our customers cannot analyze the tables 24 hours a day (performance).

I suggest upgrading to Oracle 8.1.6. interMedia in previous releases seems to be very buggy.
You said Oracle support could not help - why are we paying for support??
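Until the statistics problem itself is fixed, one hedged workaround is to pin the plan with optimizer hints so the domain index is used even when statistics are stale. A minimal sketch; the table, column, and index names here are hypothetical, not from this thread:
-- FIRST_ROWS favors the interMedia domain index; INDEX() pins it explicitly
SELECT /*+ FIRST_ROWS INDEX(d doc_text_idx) */ d.doc_id
FROM documents d
WHERE CONTAINS(d.text_col, 'searchterm') > 0
AND d.created_ts > :min_ts;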

Similar Messages

  • Intermedia text search error for file system

    I would like to search text in files stored in the file system. I have run the following procedure, but when I create the index I get an error.
    BEGIN
    CTX_DDL.CREATE_PREFERENCE('search_docroot_pref','FILE_DATASTORE');
    CTX_DDL.SET_ATTRIBUTE('search_docroot_pref','path','c:/temp/abc');
    END;
    Now when I create the index with the following syntax:
    CREATE INDEX mysearch_ind ON mytable(mycolumn) INDEXTYPE IS
    CTXSYS.context parameters('datastore search_docroot_pref');
    I get the following errors.
    ERROR at line 1:
    ORA-29855: error occurred in the execution of ODCIINDEXCREATE routine
    ORA-20000: interMedia Text error:
    DRG-50704: Net8 listener is not running or cannot start external procedures
    ORA-28575: unable to open RPC connection to external procedure agent
    ORA-06512: at "CTXSYS.DRUE", line 126
    ORA-06512: at "CTXSYS.TEXTINDEXMETHODS", line 54
    ORA-06512: at line 1
    Can anybody tell me where I am going wrong?
    Thanks,

    Hi,
    I was also facing the same problem. My Net8 connection and listener are also OK, but I am getting the same errors.
    Raju
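    For what it's worth, DRG-50704 almost always means the external procedure (extproc) agent cannot be reached. Below is a hedged sketch of the classic Net8 configuration for 8i; the ORACLE_HOME path and IPC key are assumptions, so adapt them to your environment and restart the listener afterwards.
    # listener.ora: an IPC endpoint plus an extproc SID
    LISTENER =
      (ADDRESS_LIST =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
      )
    SID_LIST_LISTENER =
      (SID_LIST =
        (SID_DESC =
          (SID_NAME = PLSExtProc)
          (ORACLE_HOME = c:\oracle\ora81)   # hypothetical path
          (PROGRAM = extproc)
        )
      )
    # tnsnames.ora: how the database reaches the agent
    EXTPROC_CONNECTION_DATA =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC0))
        (CONNECT_DATA = (SID = PLSExtProc))
      )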

  • InterMedia Text Search results

    We have interMedia Text enabled on our 3.0.9.8.2 install, and for Search Results I need to understand how the percentage score is calculated and, if results have the same score, how the order is determined. Is it possible to reorder the results? The order appears to be random. Can anyone point me to the relevant documentation?

    So Stephen, if I understand correctly, you have URL items, where the URL references content in the iFS repository. This means that these items are being indexed by the WWSBR_URL_CTX_INDX. It'd be interesting to see if the results are returned as you would expect using this index alone. This will tell you whether there is a problem with interMedia Text, or whether the problem lies with the way that Portal is using interMedia Text and combining the scores from the multiple indexes.
    Try running a SQL query like
    select i.title, score(10) scr from wwv_things i, wwsbr_url$ u
    where i.id=u.object_id
    and contains (u.absolute_url,'${oracle}',10) > 0
    order by scr desc
    This will match your search term against only the text which was indexed by the URL index, and returns the display name of the corresponding item. Here I've searched for the term 'oracle'; change this to whatever you were using in your test case. Note also that the $ implies the stem operator, which is used by default in Portal.
    If this query doesn't return the scores you were expecting, then the problem lies with the way interMedia Text is indexing your PDF documents. If it does return what you'd expect, then we need to look more closely at what Portal is doing with the scores.

  • InterMedia Text Search

    I see interMedia Search is searching only on Content Areas.
    What can I do to make it search in Portal Pages instead?
    We do not use Content Areas. We are using HTML Portlets in
    Portal Pages to publish content.

    interMedia search is done in the backend, and it is up to you how you display the results on the Web, whether in a portal or any other application. The main thing is that you have to build an index on the column of the table where you are storing the information. You then query the database to get the results and display them accordingly in the portal.
    Regarding the dates, you again have to limit your SQL query to the date range present in your database.
    It all depends on how you display the search results on the client side. Normally you return the primary key of the table whose column you indexed for search; from that primary key you can get the corresponding row, and rows in other tables that reference it. A minimal query sketch follows below.
    I don't know how you are displaying the results in the portal. Are you using any framework, such as the portlets of WebLogic Personalization Server?
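    A minimal sketch of such a query, with hypothetical table and column names (yours will differ):
    select t.doc_id, score(1) scr
    from my_content t
    where contains(t.content_col, 'search term', 1) > 0
    and t.publish_date between :start_date and :end_date
    order by scr desc;
    The doc_id returned here is the primary key you would hand back to the portal layer to fetch the full row (and any related rows) for display.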
    Thanks

  • Intermedia text search in XML document stored as CLOB

    Suppose I store an XML document as a CLOB and index it based on the tags, i.e. if I have the following table:
    create table biodata (
    resume_id number primary key,
    content clob default empty_clob(),
    applicant_id number references
    applicants(id));
    and I store various resumes in the CLOB (content column) as XML documents, e.g.:
    <experience_in_months> 22 </experience_in_months>
    Now if I want all the resume_id values that have experience_in_months >= 10,
    what would be the query that would fetch me this result?

    Originally posted by voron:
    We are storing data in XML format in an Oracle database (via CLOB). I can retrieve search results using the 'within' phrase, but am finding it hard to order them. Is there a utility or command I can use?
    Maybe you should write a stored function that extracts the section from your CLOB that should be sorted, then use this function in the ORDER BY clause of your SELECT statement; see the sketch below.
    Andreas
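    A minimal sketch of that approach, assuming the <experience_in_months> markup shown above (the function name is hypothetical and the tag scan is naive; a real implementation should use an XML parser and handle missing tags):
    create or replace function get_experience(p_doc clob) return number is
      v_from pls_integer;
      v_to   pls_integer;
    begin
      -- first character after the opening tag
      v_from := dbms_lob.instr(p_doc, '<experience_in_months>')
                + length('<experience_in_months>');
      -- position of the closing tag
      v_to := dbms_lob.instr(p_doc, '</experience_in_months>');
      return to_number(trim(dbms_lob.substr(p_doc, v_to - v_from, v_from)));
    end;
    /
    select resume_id
    from biodata
    where get_experience(content) >= 10
    order by get_experience(content);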

  • Intermedia Text Search criteria

    We have an index which contains data in the format 'PG/A/1/2001', where 'A' can be any letter between 'A' and 'Z'. When we search the index using a CONTAINS statement, we get strange results for the characters 'A' and 'S' only.
    I.e., if I use contains(wordindex,'PG/A/1/2001',0) > 0 I get 'PG/B', 'PB/C', 'PC/D', etc. If I use contains(wordindex,'PG/B/1/2001',0) > 0 I only get back 'PG/B'.
    Does anyone know if Oracle 8.1.7 is treating 'PG' or '/' as special characters in some way? There is no mention of this in the manual, and putting {} around the statement makes no difference.

    You are probably using the default English or American stoplist, on which "A" and "S" are stopwords (why "S"? Consider the word Garrett's). A stopword in a query acts like a %, so a query for PG/A means "PG" followed by any word. This is why it hits PG/A, PG/B, etc.
    If your data is this kind of code rather than real English, you should create your index with the empty stoplist, as spelled out below:
    create index ... parameters ('stoplist ctxsys.empty_stoplist')
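    Spelled out with hypothetical table and index names, that would be:
    create index wordindex_idx on mytab(wordindex)
    indextype is ctxsys.context
    parameters ('stoplist ctxsys.empty_stoplist');
    -- with no stopwords, the query matches the literal code exactly:
    select * from mytab
    where contains(wordindex, '{PG/A/1/2001}', 0) > 0;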

  • Bug: Text Search is not working for Excel spreadsheets

    Hi,
    We have published several Excel, Word and PowerPoint file items to our portal application content area.
    However, the interMedia Text search is not returning the Excel files - it is working on other file formats, though!
    I have even tried issuing a select directly from SQL*Plus, e.g.:
    select name,filename from wwv_document$ where name like '%.XLS'
    AND CONTAINS(BLOB_CONTENT,'network')>0;
    Result>> No rows selected.
    We are using Portal3.0.9 and 8.1.7 DB.
    Thanks in advance!
    Ram

    Firstly, are you synchronizing your indexes? Newly added content is not searchable until the indexes have been synced. See the Portal configuration guide for how to do this.
    Are your Excel files being indexed successfully? After indexing, are there any entries for the WWSBR_DOC_CTX_INDX in the ctx_user_index_errors view? You should be able to join via the rowid to the wwdoc_document$ table to see which docs, if any, are failing. Sketches of both checks follow below.
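    Hedged sketches of both checks (the index name is taken from the post above; the sync assumes you have privileges on the index, and ctx_user_index_errors only shows indexes owned by the connected user):
    begin
      ctx_ddl.sync_index('WWSBR_DOC_CTX_INDX');  -- index newly added documents
    end;
    /
    select err_timestamp, err_text
    from ctx_user_index_errors
    where err_index_name = 'WWSBR_DOC_CTX_INDX';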

  • How to do a fast text search on a field in Oracle without using interMedia?

    How can I do a fast text search on a field in Oracle without using interMedia? Thank you, Paul.

    Yes, it is overridden in the VOImpl:
    public void executeQuery()
    {
        // rebuild the query with the current ORDER BY clause before executing
        setQuery((new StringBuilder()).append(selectStmt).append(" order by ").append(getOrderByClause()).toString());
        OAApplicationModuleImpl am = (OAApplicationModuleImpl) getApplicationModule();
        if (am.isLoggingEnabled(1))
        {
            am.writeDiagnostics(getClass().getName() + ".executeQuery", " Query:" + getQuery(), 1);
        }
        super.executeQuery();
    }
    But I have extended the VO and substituted it. In the substituted VOImpl, instead of executeQuery(), I have written:
    public void customExecuteQuery()
    {
        setQuery((new StringBuilder()).append(selectStmt).append(" order by ").append(getOrderByClause()).toString());
        executeQuery();
    }
    Will this work, or do I need to make any changes?
    Thanks,
    Thanks,

  • Poor performance of the BDB cache

    I'm experiencing incredibly poor performance of the BDB cache and wanted to share my experience, in case anybody has any suggestions.
    Overview
    Stone Steps maintains a fork of a web log analysis tool, the Webalizer (http://www.stonesteps.ca/projects/webalizer/). One of the problems with the Webalizer is that it maintains all data (i.e. URLs, search strings, IP addresses, etc.) in memory, which puts a cap on the maximum size of the data set that can be analyzed. Naturally, BDB was picked as the fastest database to maintain the analyzed data on disk and to produce reports by querying the database. Unfortunately, once the database grows beyond the cache size, overall performance goes down the drain.
    Note that the version of SSW available for download does not support BDB in the way described below. I can make the source available for you, however, if you find your own large log files to analyze.
    The Database
    Stone Steps Webalizer (SSW) is a command-line utility and needs to preserve all intermediate data for the month on disk. The original approach was to use a plain-text file (webalizer.current, for those who know anything about SSW). The BDB database that replaced this plain text file consists of the following databases:
    sequences (maintains record IDs for all other tables)
    urls - primary database containing URL data: record ID (key), the URL itself, and grouped data such as number of hits, transfer size, etc.
    urls.values - secondary database that contains a hash of the URL (key) and the record ID linking it to the primary database; this database is used for value lookups.
    urls.hits - secondary database that contains the number of hits for each URL (key) and the record ID to link it to the primary database; this database is used to order URLs in the report by the number of hits.
    The remaining databases are here just to indicate the database structure. They are the same in nature as the two described above. The legend is as follows: (s) will indicate a secondary database, (p) - primary database, (sf) - filtered secondary database (using DB_DONOTINDEX).
    urls.xfer (s), urls.entry (s), urls.exit (s), urls.groups.hits (sf), urls.groups.xfer (sf)
    hosts (p), hosts.values (s), hosts.hits (s), hosts.xfer (s), hosts.groups.hits (sf), hosts.groups.xfer (sf)
    downloads (p), downloads.values (s), downloads.xfer (s)
    agents (p), agents.values (s), agents.hits (s), agents.visits (s), agents.groups.visits (sf)
    referrers (p), referrers.values (s), referrers.hits (s), referrers.groups.hits (sf)
    search (p), search.values (s), search.hits (s)
    users (p), users.values (s), users.hits (s), users.groups.hits (sf)
    errors (p), errors.values (s), errors.hits (s)
    dhosts (p), dhosts.values (s)
    statuscodes (HTTP status codes)
    totals.daily (31 days)
    totals.hourly (24 hours)
    totals (one record)
    countries (a couple of hundred countries)
    system (one record)
    visits.active (active visits - variable length)
    downloads.active (active downloads - variable length)
    All these databases (49 of them) are maintained in a single file. Maintaining a single database file is a requirement, so that the entire database for the month can be renamed, backed up and used to produce reports on demand.
    Database Size
    One of the sample Squid logs I received from a user contains 4.4M records and is about 800MB in size. The resulting database is 625MB in size. Note that there is no duplication of text data - only nodes and such values as hits and transfer sizes are duplicated. Each record also contains some small overhead (record version for upgrades, etc).
    Here are the sizes of the URL databases (other URL secondary databases are similar to urls.hits described below):
    urls (p):
    8192 Underlying database page size
    2031 Overflow key/data size
    1471636 Number of unique keys in the tree
    1471636 Number of data items in the tree
    193 Number of tree internal pages
    577738 Number of bytes free in tree internal pages (63% ff)
    55312 Number of tree leaf pages
    145M Number of bytes free in tree leaf pages (67% ff)
    2620 Number of tree overflow pages
    16M Number of bytes free in tree overflow pages (25% ff)
    urls.hits (s)
    8192 Underlying database page size
    2031 Overflow key/data size
    2 Number of levels in the tree
    823 Number of unique keys in the tree
    1471636 Number of data items in the tree
    31 Number of tree internal pages
    201970 Number of bytes free in tree internal pages (20% ff)
    45 Number of tree leaf pages
    243550 Number of bytes free in tree leaf pages (33% ff)
    2814 Number of tree duplicate pages
    8360024 Number of bytes free in tree duplicate pages (63% ff)
    0 Number of tree overflow pages
    The Testbed
    I'm running all these tests using the latest BDB (v4.6) built from the source on Win2K3 server (release version). The test machine is 1.7GHz P4 with 1GB of RAM and an IDE hard drive. Not the fastest machine, but it was able to handle a log file like described before at a speed of 20K records/sec.
    BDB is configured in a single file within a BDB environment, using private memory, since only one process ever has access to the database.
    I ran a performance monitor while running SSW, capturing private bytes, disk read/write I/O, system cache size, etc.
    I also used a code profiler to analyze SSW and BDB performance.
    The Problem
    Small log files, such as 100MB, can be processed in no time - BDB handles them really well. However, once the entire BDB cache is filled up, the machine goes into some weird state and can sit in this state for hours and hours before completing the analysis.
    Another problem is that traversing large primary or secondary databases is a really slow and painful process. It is really not that much data!
    Overall, the 20K rec/sec quoted above drops down to 2K rec/sec. And that's after most of the analysis has been done, while just trying to save the database.
    The Tests
    SSW runs in two modes, memory mode and database mode. In memory mode, all data is kept in memory in SSW's own hash tables and then saved to BDB at the end of each run.
    In memory mode, the entire BDB is dumped to disk at the end of the run. At first, it runs fairly fast, until the BDB cache is filled up. Then writing (disk I/O) goes at a snail's pace, about 3.5MB/sec, even though this disk can write at about 12-15MB/sec.
    Another problem is that the OS cache gets filled up, chewing through all available memory long before completion. To deal with this, I disabled the system cache using the DB_DIRECT_DB/LOG options. I could see the OS cache left alone, but once the BDB cache was filled up, processing speed all but stopped.
    Then I flipped options and used the DB_DSYNC_DB/LOG options to disable OS disk buffering. This improved overall performance, and even though the OS cache was filling up, it was being flushed as well; eventually, SSW finished processing this log, sporting 2K rec/sec. At least it finished, though - other combinations of these options led to never-ending tests.
    In the database mode, stale data is put into BDB after processing every N records (e.g. 300K rec). In this mode, BDB behaves similarly - until the cache is filled up, the performance is somewhat decent, but then the story repeats.
    Some of the other things I tried/observed:
    * I tried to experiment with the trickle option. In all honesty, I hoped that this would be the solution to my problems - trickle some, make sure it's on disk and then continue. Well, trickling was pretty much useless and didn't make any positive impact.
    * I disabled threading support, which gave me some performance boost during regular value lookups throughout the test run, but it didn't help either.
    * I experimented with page sizes, ranging from the default 8K to 64K. Using large pages helped a bit, but as soon as the BDB cache filled up, the story repeated.
    * The Db.put method, which was called 73557 times while profiling the save of the database at the end, took 281 seconds. Interestingly enough, this method called the ReadFile function (Win32) 20000 times, which took 258 seconds. The majority of the Db.put time was wasted on looking up records that were being updated! These lookups seem to be the true problem here.
    * I tried libHoard - it usually provides better performance, even in a single-threaded process, but libHoard didn't help much in this case.

    I have been able to improve processing speed up to 6-8 times with these two techniques:
    1. A separate trickle thread was created that would periodically call DbEnv::memp_trickle. This works especially well on multicore machines, but also speeds things up a bit on single-CPU boxes. This alone improved speed from 2K rec/sec to about 4K rec/sec.
    2. Maintaining multiple secondary databases in real time proved to be the bottleneck. The code was changed to create the secondary databases at the end of the run (calling Db::associate with the DB_CREATE flag), right before the reports that use them are generated. This improved speed from 4K rec/sec to 14K rec/sec.

    Hello Stone,
    I am facing a similar problem, and I too hope to resolve it with memp_trickle. I had these queries:
    1. What percentage of clean pages did you specify?
    2. At what interval was your trickle thread calling memp_trickle?
    This would give me a rough idea of how to tune my app. I would really appreciate it if you could answer these queries.
    Regards,
    Nishith.

  • Kazehakase with full-text search in history using Hyper Estraier

    A guide for Kazehakase with full-text history search using Hyper Estraier
    I adopted qdbm and submitted hyperestraier in AUR, so you can enable the full-text search feature by installing Hyper Estraier from AUR and rebuilding Kazehakase using srcpac or yaourt.
    1. Install QDBM and Hyper Estraier from AUR. The easiest way is using yaourt. (If you prefer not to use yaourt, download the tarball and do makepkg && pacman -U manually.)
    yaourt -S qdbm hyperestraier
    2. Rebuild Kazehakase using srcpac.
    srcpac -Sb kazehakase
    Of course you can also rebuild Kazehakase using yaourt.
    yaourt -Sb kazehakase
    You don't have to modify the configure options in Kazehakase's PKGBUILD, because "--enable-hyper-estraier" is implied by default. If Hyper Estraier is installed successfully, you'll get "Hyper Estraier: yes" in the configure messages.
    3. Configure Kazehakase. To enable full-text search in history, run Kazehakase, go to Edit > Preference > General, change UI Level to "Expert", and apply the settings. Next, go to Edit > Preference > History, set the search engine name to "hyper-estraier", and restart Kazehakase. You'll then see a "History Search" box next to the "Internet Search" box.
    Sorry for my poor English.

    "ctxsrv" is no longer supported at version 10.1.
    Instead, the PARAMETERS clause has a SYNC option, for which you can specify ON COMMIT; see the sketch below.
    If you created the database with DBCA and chose the Oracle Text option, then there is no need to perform any further operations to configure Oracle Text.
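    A minimal sketch of that 10g syntax (table and index names are hypothetical):
    create index my_text_idx on docs(text_col)
    indextype is ctxsys.context
    parameters ('sync (on commit)');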

  • How can I use ONE Text search iView to event/affect mutliple Result Sets?

    hello everyone,
    I have a special situation in which I have 6 flat tables in my repository, which all have a common field called Location ID (a flat lookup to the Locations table).
    I am trying to build a page with a free-form text search iView on Table #1 (search field = Location ID). When I execute the search, the result set for Table #1 is properly updated, but how do I also get the Result Set iViews for Tables #2-6 to react to the event from the Text Search iView for Table #1 so that they are updated too?
    I don't want to have to build 6 different text search iViews (one for each table). I just want to use ONE text search iView for all the different result set tables. But in the documentation and iView properties, the text search iView doesn't have any eventing.
    if you have any suggestions, please help.
    many thanks in advance,
    mm

    hello Donna,
    that should not be a problem, since you are dealing with result sets and detail iViews, and custom eventing can be defined for those iViews.
    Yes, it says "no records found" because an active search and record selection haven't been performed for it (only for your main table).
    So, yes, define a custom event and pass the appropriate parameters, and you should be fine.
    Creating a custom event between a Result Set iView and an Item Details iView is easy and works. I have done it.
    See page 35 of the Portal Content Development Guide for a step-by-step example, which is what I used.
    For my particular situation, the problem I'm having is that I want the Search Text iView's event (i.e., when the Submit button is pressed) to be published to multiple iViews, all with different tables. Those tables all share some common fields, which is what the Search iView has, so I'd like to pass the search criteria to all of the iViews.
    -mm

  • How do I use XSLT & XML is stored in InterMedia Text.....

    I use interMedia Text to store XML documents. How do I use the XSLT Processor API to transform the data that is retrieved by the XML SQL Utility?
    //*** Source Code
    public Document xmlquery(String tabName, String xslfilename)
    {
        Document xmlDocToReturn = null;
        String xmlString;
        try
        {
            DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
            // initiate a JDBC connection (connection setup elided in the original post),
            // then initialize the OracleXMLQuery
            OracleXMLQuery qry = new OracleXMLQuery(conn,
                "select XML_TEXT from bookstore where contains (xml_text,'John WITHIN authorsec')>0");
            // set the maximum number of rows to be returned
            qry.setMaxRows(2);
            // get the XML document in string format
            xmlString = qry.getXMLString();
            // print out the XML document
            System.out.println(" OUTPUT IS:\n" + xmlString);
            // get the XML document as a DOM
            xmlDocToReturn = qry.getXMLDOM();
            conn.close();
        }
        catch (SQLException e) {
        }
        return xmlDocToReturn;
    }
    // caller side:
    xml = (XMLDocument) query.xmlquery(args[1], args[0]);
    // Instantiate the stylesheet
    XSLStylesheet xsl = new XSLStylesheet(xsldoc, xslURL);
    XSLProcessor processor = new XSLProcessor();
    // Display any warnings that may occur
    processor.showWarnings(true);
    processor.setErrorStream(System.err);
    // Process XSL
    DocumentFragment result = processor.processXSL(xsl, xml);
    Thank you.

    Your problem here is that you are storing the whole XML document in a CLOB and searching it using interMedia. When you do a query like:
    SELECT xml_text
    FROM bookstore
    WHERE CONTAINS(xml_text,'John WITHIN authorsec')>0
    The output from the XML SQL Utility using getXMLDOM() looks like this:
    <ROWSET>
    <ROW>
    <XML_TEXT><![CDATA[
    <bookstuff>
    <authorsec>
    <name>Steve</name>
    </authorsec>
    etc.
    </bookstuff>
    ]]>
    </XML_TEXT>
    </ROW>
    </ROWSET>
    with the document as a single text value (it's actually just a text node, not a CDATA node, but the above illustrates conceptually that the whole XML document is one big text node).
    To transform it, you'll need to parse that XML text into an in-memory XML document by constructing a StringReader() on the text value and parsing that reader.

  • Safari hangs and poor performance in MBPR (Mid 2012)

    Safari hangs and performs poorly on my MacBook Pro with Retina display (Mid 2012). OS X 10.10.2 is up to date.

    Please answer as many of the following questions as you can. You may already have answered some of them. In that case, there's no need to repeat the answers.
    Back up all data before making any changes.
    Have you restarted your router and your broadband device (if they're separate) since you first noticed the problem? If not, do that now and see whether there's any change.
    If your browser is Safari, then from the Safari menu bar, select
              Safari ▹ Preferences... ▹ Privacy ▹ Remove All Website Data
    and confirm. If the Downloads button (with the icon of a downward-pointing arrow) is showing in the toolbar, click it and then click Clear in the box that appears. The download history will be removed. Any change?
    If you're running OS X 10.9 or later, select the Advanced tab in the Preferences window and uncheck the box marked
              Stop plug-ins to save power
    Any change?
    Quit and relaunch the browser. Any change?
    Enable guest logins* and log in as Guest. Don't use the Safari-only “Guest User” login created by “Find My Mac.”
    While logged in as Guest, you won’t have access to any of your documents or settings. Applications will behave as if you were running them for the first time. Don’t be alarmed by this behavior; it’s normal. If you need any passwords or other personal data in order to complete the test, memorize, print, or write them down before you begin.
    Test while logged in as Guest. Same problem?
    After testing, log out of the guest account and, in your own account, disable it if you wish. Any files you created in the guest account will be deleted automatically when you log out of it.
    *Note: If you’ve activated “Find My Mac” or FileVault, then you can’t enable the Guest account. The “Guest User” login created by “Find My Mac” is not the same. Create a new account in which to test, and delete it, including its home folder, after testing.
    Are any other web browsers installed, and are they affected the same way? What about other Internet applications, such as iTunes and the App Store?
    If other browsers and Internet applications are also affected, follow these instructions and test. Any change?
    If Parental Controls is active for any user, please turn it off and test. Any change?
    If only Safari is affected, launch the Activity Monitor application and enter "web" (without the quotes) in the search box. If a process named "Safari Web Content" is shown in red or is using more than about 5% of a CPU, select it and force it to quit by clicking the X or Quit Process button in the toolbar of the window. There may be more than one such process. Any improvement?
    Follow the instructions in this support article. Any change?
    Open the iCloud preference pane and uncheck the box marked Photos, if it's checked. Any change?
    Are there any other devices on the same network that can browse the Web, and are they affected?
    If you can test Safari on another network, is it the same there?
    If you connect to your router with Wi-Fi and you can also connect with Ethernet, do that and turn off Wi-Fi. Any difference?

  • Crashing and poor performance during playback of a large project.

    Hi,
    I've been a regular user of iMovie for about 3 years and have edited several 50GB+ projects of DV-quality footage without too many major issues with lag or 'dropped frames'. I currently have an 80GB project that resides on a 95% full 320GB FireWire 400 external drive, and it has been getting very slow to open and near impossible to work with.
    Pair the bursting-at-the-seams external drive with an overburdened 90% full internal drive, and the poor performance wasn't unexpected. So I bought a 1TB FireWire 400 drive to free up some space on my Mac. My large iTunes library (150GB) was the main culprit, and it was quickly moved onto the new drive.
    The iMovie project was then moved into my Mac's Movies folder - I figured that the project needs some "room" to work (not that I really understand how Macs use memory) and that having roughly 80GB free with 1.5GB RAM (which is more than I used to have) would make everything just that much smoother.
    Wrong.
    The project opened in roughly the same amount of time - but when I tried to play back the timeline, it played like rubbish, and after 10-15 seconds the Mac went into 'sleep' mode. The screen goes off, the fans die down, and the 'heartbeat' light goes on. A click of the mouse 'wakes' the Mac, only to find that if I try again, I get the same result.
    I've probably got so many variables going on here that it's hard to suggest what the problem might be, but all I could think of was repairing permissions (which I did; none were needed).
    Stuck on this. Anyone have any advice?

    I understand completely, having worked with a 100 GB project once. I found that once a movie bloats up to that size, it just gets more difficult to work with, with jerky playback.
    I do have a couple of suggestions for you.
    You may need more than that 80GB free space for this movie. Is there any reason you cannot move it to the 1TB drive? If you have only your iTunes on it, you should have about 800 GB free.
    If you still need to have the project on your computer's drive, set your computer to never sleep.
    How close are you to finishing editing this movie? If you are nearly done except for adding audio clips, you can export (share) it as a QuickTime Full Quality movie. The resulting QuickTime version of your iMovie will be smaller because it contains only the clips actually used in the movie, not all the saved whole clips that iMovie keeps for its nondestructive editing feature. The QuickTime movie will be one continuous clip, incorporating all your edits and added audio. It CAN be edited further, but you cannot change the text of titles already there, change transitions, or remove already-added audio.
    I actually do this with nearly every iMovie. I create my movies by first importing videos, then adding still photos, then editing with titles, effects and transitions. I add audio last, and if it becomes too distorted in playback, I export the movie and then continue adding audio clips.
    My 100+ GB movie slimmed down to only 8 GB with this method. (The large size was due to having so many clips. The movie was from VHS footage of my son's little league all-star game, and the video had so many skipped segments that I had to split it into thousands of clips to remove the dropped ones. Very old VHS tape!).
    I haven't upgraded to QT 7.5.5, but I heard that the jerky playback issue is mostly resolved with this new upgrade. I am in mid-project with about 5 iMovies, so I will probably plod along with my work-around method, not wanting to upgrade in the middle of any of them.
    Hope this is helpful to you.

  • Skype crashing and poor performance

    Hello!
    I have a Lumia 625 with WP 8.1. My problem is that Skype performs really poorly on my phone. It crashes 6 times out of 10 on startup, and even if I manage to start it, the whole app is slow and laggy. Sometimes it's so laggy I can't even write a message. Video calling is absolutely out of the question; it crashes my whole phone. I have no similar problems with other instant messaging apps or with high-end games. Something in the Skype app is obviously using way more resources than it's supposed to. It's a simple chat program; why would it need so many resources?
    The problem seems to originate from the lower (512 MB) RAM size of my phone model, because I have experienced the same effect with poorly written apps that forget there are 512 MB RAM devices, not only 1GB+ ones, and use too many resources.
    Please don't try to suggest to restart/reset the phone, and reinstall the app. Those are already behind me, and they did NOT help the problem. I'm not searching for temporary workarounds.
    Please find a solution for this problem, because it is super annoying, and I can't use Skype, which will eventually result in me leaving Skype.

    When it crashes on startup, it goes like this:
    I tap the Skype tile.
    The black screen with "Loading....." appears (the default WP loading screen). This usually takes longer than it would on any other app.
    For a blink of an eye the Skype GUI appears, but it instantly crashes.
    If I can successfully start the app, it just keeps lagging. I start to write a message to a contact, and sometimes the letters don't appear as I touch them but show up much later, all at once. If I tap the send button, the whole GUI freezes (it seems to freeze until the contact gets my message). Sometimes the lag gets stronger and sometimes it almost vanishes, but if I keep making inputs while the lag is strong, it sometimes crashes the whole app.
    When I first installed the app, everything was fine, but after a while this behavior appeared. I reinstalled the app, and that solved the problem temporarily, but after some time the problem reappeared. I don't know if it's relevant, but there was a time when I couldn't make myself appear online all the time (when the app was not started), and during that time I didn't experience the lags and crashes. Anyway, what I'm sure about is that the lags get worse over time. I don't know if it's because of use of the app (caching?) or the updates the phone makes to itself (a conflict?).
    I will try to reinstall Skype. Probably it will fix it for now. I hope the problem won't appear again.

Maybe you are looking for

  • Missing Points / $10 certificate from Preorder

    Hi BestBuy Community, Has anyone received their $10 preorder bonus from Assassin's Creed Unity? It's been well over 30 days now and I see that points were given, but it does not reflect the current balance. I have private messaged a MOD but no respo

  • Aggregating output of two BS in OSB Split join

    Hi, I am using a split join to invoke two BS, and after each invoke activity in the parallel branch I have placed an assign activity, assigning the output variable of the invoke to the reply activity output variable response.parameter. However in the o/p I am getting

  • Party-Relationships in Installed Base

    Hi, Can anybody highlight the purpose/uses/advantages of setting party relationships in Installed Base? What can be achieved by creating these relationships? Regards, Mohammed

  • Problems updating/installing air application

    Hi, We have an Adobe AIR application that uses the AIR update mechanism for newer versions. It seems to work fine for most of the customers, but on some computers, when it tries to install the new version, it cannot. The only clue we have are some er

  • Problems with Safari, Acrobat, Intel Macs, and Life Cycle Form Mgr.

    Here is my problem: A Mac user on 10.4.10, with Acrobat Standard Ed v.7.0.9 installed, has to approve purchase forms through Adobe LiveCycle Form Manager via Safari 2.0.4. When trying to approve the PDF forms, it tells her she needs to