Query on Memory consumption of an object

Hi,
I am able to get information on the number of instances loaded and the memory occupied by those instances using a heap histogram.
Class                     Instance Count    Total Size
class [C                  10965             557404
class [B                  2690              379634
class [S                  3780              220838
class java.lang.String    10807             172912
Is there a way to get detailed info on which class's String objects consume the most memory?
In other words, the memory consumption of String is 172912. Can I get a breakdown like:
String Objects of Class A - 10%
String Objects of Class B - 90%
Thanks

I don't know what profiler you are using, but many memory profilers can tell you where the strings are allocated.
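If you don't have a profiler at hand, one approach is to take a heap dump and then group the String instances by the classes that reference them in an analyzer such as Eclipse MAT or jhat. A minimal sketch of triggering the dump programmatically, assuming a HotSpot JVM (the output file name is just an example):

import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class HeapDump {
    public static void main(String[] args) throws Exception {
        // HotSpot-specific diagnostic MXBean; not available on every JVM.
        HotSpotDiagnosticMXBean bean =
            ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // true = dump only live (reachable) objects.
        bean.dumpHeap("strings.hprof", true);
    }
}

Opening strings.hprof and grouping java.lang.String by the referencing class gives the kind of per-class split asked about above.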

Similar Messages

  • Query on memory consumption during SQL

    Hi SAP Gurus,
    Could I kindly request for your inputs concerning the following scenario?
To put it quite simply, we have a program where we're required to retrieve all the fields from a lengthy custom table, i.e. the select statement uses an asterisk. Unfortunately, there isn't really a way to avoid this short of a total overhaul of the code, so we had to settle for this (for now).
The program retrieves from the database table using a where clause filtering on a single company code value. Kindly note that company code is not the only key in the table. To help with memory consumption, the original developer employed retrieval by packages (also note that the total length of each record is 1803...).
The problem encountered is as follows:
- Using company code A, retrieving 700k entries in packages of 277, the program ran without any issues.
- However, using company code B, retrieving 1.8m entries in packages of 277, the program encountered a TSV_TNEW_PAGE_ALLOC_FAILED short dump. This error is encountered the very first time the program goes through the select statement, ergo it has not even been able to pass through any additional internal table processing yet.
The biggest difference between the two company codes is the number of corresponding records they have in the table. I've checked whether company code B had more values in its columns than company code A, but they're just the same.
What I do not quite understand is why memory consumption changed just by changing the company code in the selection. I thought that the memory consumed by both company codes should be the same... at least in the beginning, considering that we're retrieving by packages, so we're not trying to get all of the records at once. However, the fact that it failed at the very beginning has shown me that I'm gravely mistaken.
    Could someone please enlighten me on how memory is consumed during database retrieval?
    Thanks!

    Hi,
with FAE (FOR ALL ENTRIES) the whole query is executed even for a single record in the itab, and all results for the company code are transferred from the database to the DBI, since the duplicates are removed by the DBI, not by the database.
If you use PACKAGE SIZE, the result set is buffered in a system table in the DBI (which allocates memory from your user quota). From there the packages are built and handed over to your application (into table lt_temp).
See the recent ABAP documentation:
    Since duplicate rows are only removed on the application server, all rows specified using the WHERE condition are sometimes transferred to an internal system table and aggregated here. This system table has the same maximum size as the normal internal tables. The system table is always required if addition PACKAGE SIZE or UP TO n ROWS is used at the same time. These do not affect the amount of rows transferred from the database server to the application server; instead, they are used to transfer the rows from the system table to the actual target area.
What you should do:
Calculate the size needed for your big company code B: the number of rows multiplied by the line length. That is the minimum amount you need for your user memory quota (quotas can be checked with ABAP report RSMEMORY). If the amount of memory is sufficient, then try without PACKAGE SIZE:
SELECT * FROM <custom table>
  INTO TABLE lt_temp
  FOR ALL ENTRIES IN lt_bukrs
  WHERE bukrs = lt_bukrs-bukrs
  ORDER BY PRIMARY KEY.
This might actually use less memory than the PACKAGE SIZE option for the FOR ALL ENTRIES. Since with FAE the data is buffered in the DBI anyway (and subtracted from your quota), you can do it right away and avoid storing portions twice (once in the DBI buffer and again, package by package, in lt_temp).
If the amount of memory is still too big, you have to either increase the quotas, select less data (additional where conditions), or avoid using FAE in this case so that you don't read all the data in one go.
    Hope this helps,
    Hermann

  • Query memory consumption

    Hi,
Need some expert advice on SQL here. May I know how much memory (RAM) a simple query like 'SELECT SUM(Balance) FROM OCRD' consumes?
What about a query like
    select (select sum(doctotal) from ordr) + (select sum(doctotal) from odln) + (select sum(doctotal) from oinv)
How much memory would it normally take? The reason is that I have a query that is quite similar to this and it would be run quite often, so I wonder if it is feasible to use this type of query without slowing the server to a crawl.
Please note that the real query would include JOINs and such. Thanks
    Any information is appreciated

    Hi Melvin,
    Not sure I'd call myself an expert but I'll have a go at an answer
I think you are going to need to set up a test environment and then stress test your solution to see what happens. There are so many different variables that affect memory consumption that no one is likely to be able to say just what the impact will be on your server. SQL Server, by default, will allocate 1024KB to each query but, of course, quite a number of factors will affect whether SQL needs more memory than this to execute a particular query (e.g. the number of joins, the locks created, whether the data is grouped or sorted, the size of the data, etc.). Also, SQL will release memory as soon as it can (based on its own algorithms), so a query that is run periodically has much less impact on the server than a query that will be run concurrently by multiple users. For these reasons, the impact can only really be assessed if you test it in a real-world scenario.
If you've ever seen SQL Server memory usage when XL Reporter is running a very large report then you'll know that this is a very memory-hungry operation. XL Reporter bombards SQL with a huge number of separate little queries and SQL Server starts grabbing significant amounts of memory to fulfill them. As the queries are coming so fast, SQL hasn't yet got around to releasing the memory used by previous queries, so SQL instead grabs available memory from the server.
You'll get better performance and scalability by using stored procedures, but SDK certification does not allow the use of SPs in the SBO databases.
    Hope this helps,
    Owen

  • Dbxml memory consumption

    I have a query that returns about 10MB worth of data when run against my db -- it looks something like the following
    'for $doc in collection("VcObjStore")/doc
    where $doc[@type="Foo"]
    return <item>{$doc}</item>'
when I run this query in dbxml.exe, I see the memory footprint (of dbxml.exe) increase by 125MB. Once the query finishes, it comes back down.
I expected memory consumption to be somewhat larger than what the query actually returns, but this seems quite extreme.
Is this behavior expected? What is a general rule of thumb on memory usage with respect to result size (is it really 10x)? Any way to make it less of a hog?
    Thanks

    Hi Ron,
    Thanks for a quick reply!
- I wasn't actually benchmarking DBXML. We've observed large memory consumption during query execution in our test application and verified the same issue with dbxml.exe. Since dbxml.exe is well understood by everyone familiar with DBXML, I thought it would help to start with that.
    - Yes, an environment was created for this db. Here is the code we used to set it up
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setInitializeLocking(true);       // enable the locking subsystem
envConfig.setInitializeCache(true);         // enable the shared memory cache
envConfig.setAllowCreate(true);             // create the environment if absent
envConfig.setErrorStream(System.err);
envConfig.setCacheSize(1024 * 1024 * 100);  // 100MB cache
- I'd like an explanation of the reasons behind the performance difference between these two queries:
    Query 1:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return $doc'
    552 objects... <snip>
    Time in seconds for command 'query': 0.031
    Query 2:
    dbxml> time query 'for $doc in collection("VcObjStore")/doc
    where $doc[@type="VirtualMachine"]
    return <val>{$doc}</val>'
    552 objects... <snip>
    Time in seconds for command 'query': 5.797
    - Any way to make the query #2 go as fast as #1?
    Thanks!

  • How to query in memory on a subset?

TopLink's query-in-memory capability is quite an exclusive feature. However, it doesn't address the obvious need for indexing. Anybody with exposure to .NET will be aware of a paradigm shift towards using SQL against structures in memory. In my application I would like to use TopLink's in-memory querying against a specific list of persistent objects. That is, I don't want TopLink to iterate over the 5000 instances of class X that are in the cache. I would like the following use cases to be supported:
- query against UOW-registered objects, excluding the session cache
- query against a specific list of registered objects
Please provide the existing API in TopLink 10.x, and if it only exists in EclipseLink, let me know the API.
If the API is missing, please create an enhancement request and let me know the number.

Below are the use cases so that you can easily derive the API needed:
1- Load all instances of class X, then query for specific instances with a where condition on something other than the PK.
- If loading is just one time, then the query should check both UOW and session, so this is already supported, but it needs an index if there is a lot of data.
- Another way to see this: for each class that doesn't have too many instances, we want to load everything into memory, then redirect all queries against the class to run in memory.
2- Batch method handling a range or subset. Take a domain having many organizations: start by getting all timesheets of organization X for last week from the DB, for optimization purposes. Then the code starts to classify/process the data, e.g. query in memory for timesheets of this subset with state x, then y, then z. This first query could be just against the UOW, because we just loaded the data. However, the next query, which is against only timesheets with state x, should ideally run against the list of timesheets with state x already populated in a collection. So having scope on the UOW helps only with performance, while having scope on a specific collection provides both a specific result and performance. This use case may seem weird, but in complex/legacy applications it's often the case that existing code is not batch-oriented and queries for similar subsets one after another, which is not performant. The fastest refactoring for performance is then to leave the existing non-batch queries unmodified but redirect them to memory, once we can ensure that memory is loaded with the needed data before jumping into the non-batch-oriented legacy code.
3- Simplification of iteration by moving from verbose Java to SQL. We have a lot of code with a Collection of a Map of a Map; we iterate, and when looking at some state of the leaf item we still filter out stuff. Being able to query in memory against a collection/set should simplify our code.
4- Removal of duplicate SQL. A complex application may end up with different modules calling the same module to get a piece of information, resulting in duplicate SQL sent to the DB if the query is not against the PK, e.g. select * from X where FK = y. If FK is unique, then I can avoid all duplicate SQL being sent to the DB by looking first in the UOW for any instance of X with FK = y. In this case I want to look only in the UOW, for performance reasons, because indexing is not supported. A sketch of this kind of cache-only lookup follows below.
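For the cache-only / UOW-conforming part of these use cases, the native query API already offers some knobs. A minimal sketch, assuming the EclipseLink native API (the TopLink 10.x equivalents live under oracle.toplink.*) and a hypothetical Timesheet class; this covers cache scoping only, not the requested indexing or collection-scoped querying:

import java.util.List;
import org.eclipse.persistence.expressions.ExpressionBuilder;
import org.eclipse.persistence.queries.ReadAllQuery;
import org.eclipse.persistence.sessions.UnitOfWork;

public class InMemoryQueryExample {
    // Hypothetical persistent class, stubbed here for illustration only.
    public static class Timesheet {
        private String state;
    }

    public static List<?> timesheetsWithState(UnitOfWork uow, String state) {
        ReadAllQuery query = new ReadAllQuery(Timesheet.class);
        ExpressionBuilder eb = query.getExpressionBuilder();
        query.setSelectionCriteria(eb.get("state").equal(state));
        query.checkCacheOnly();             // never go to the database
        query.conformResultsInUnitOfWork(); // reflect uncommitted UOW changes
        return (List<?>) uow.executeQuery(query);
    }
}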

  • Memory consumption of queries in workbooks

We have an issue with the execution of a workbook which contains several queries. The queries require a great deal of memory, which finally leads to a short dump (TSV_TNEW_PAGE_ALLOC_FAILED). We found that during execution of the workbook the memory is not released after a query has been executed, and therefore at some point the dump occurs. However, if the queries are refreshed manually one after the other in the workbook, the memory is released and the workbook can finally be executed via this workaround.
My question is whether anyone has an idea if it is possible to apply a setting somewhere so that the queries release the memory after execution when they are all refreshed together in the workbook?
    Thanks a lot in advance for any hint & Kind regards,
    Hans-Jörg

    Hi,
    Try this,
You may be able to work around the problem by increasing the free memory available, parameter em/initial_size_MB (contact your Basis team or refer to note 835474).
Also concentrate on parameter ztta/roll_extension (refer to note 146289).
Try increasing the parameter abap/heap_area_dia from tcode RZ11.
Also check the following notes in detail as well:
649327 - Analysis of memory consumption
425207 - SAP memory management, current parameter ranges
369726 - TSV_TNEW_PAGE_ALLOC_FAILED
185185 - Application: Analysis of memory bottlenecks
If the issue persists, please review SAP Note 779123 and the query design.
    check this,
    http://scn.sap.com/thread/288222
    http://www.sapfans.com/forums/viewtopic.php?f=3&t=109557
    regards,
    anand.

  • Memory Consumption: Start A Petition!

I am using SQL Developer 4.0.0.13 Build MAIN 13.80. I was praying that SQL Developer 4.0 would no longer use so much memory and, in doing so, slow to a crawl. But that is not the case.
Is there a way to start a "petition" to have the SQL Developer team focus on the product's memory usage? This problem has been there for years now, with many posts and no real answer.
If there isn't a place to start a "petition", let's do something here that Oracle will respond to.
    Thank you

Yes, at this point (after restarting) SQL Developer is functioning fine. Windows reports 1+ GB of free memory. I have 3 worksheets open, all connected to two different DB connections. Each worksheet has 1 to 3 pinned query results. My problem is that after working in SQL Developer for a day or so, with perhaps 10 worksheets open across 3 database connections, and having queried large data sets and performed large exports, it becomes unresponsive even after closing worksheets. It appears to me that it does not clean up after itself.
I will use Java VisualVM to compare memory consumption and see if it reports that SQL Developer is releasing memory, but in the end I don't care about that. I just need a responsive SQL Developer, and if I need to close some worksheets at times I can understand doing so, but at this time that does not help.

  • Portal Session Memory Consumption

    Dear All,
I want to see the memory consumption of user sessions in Portal 7.0, i.e. if a portal user opens a session, how much memory is consumed by him/her. How can I check this? Is there any default value associated with this?
Also, will backend system memory load get added to the portal's consumption, or to that specific backend system's memory consumption?
    Thanks in Advance......
    Vinayak

I'm seeing the exact same thing with our setup (it's essentially the same as yours). The WLS 5.1 documentation indicates that Java objects that aren't serializable aren't supported with in-memory replication. My testing has indicated that the <web_context>._SERVLET_AUTHENTICATION_ session value (which is of class type weblogic.servlet.security.ServletAuthentication) is not being replicated. From what I can tell in the WLS 5.1 API Javadocs, this class is a subclass of java.lang.Object (doesn't mention serializable) as of SP9.
When <web_context>._SERVLET_AUTHENTICATION_ doesn't come up in the SECONDARY cluster instance, the <web_context>.SERVICEMANAGER.LOGGED.IN gets set to false.
I'm wondering if WLCS 3.2 can only use file or JDBC for failover.
Either way, if you learn anything more about this, will you keep me informed? I'd really appreciate it.
    >
Hi,
We have clustered two instances of WLCS in our development environment with the properties file configured for "in memory replication" of session data. Both instances come up properly and join the cluster properly. But the problem is with the in-memory replication: it looks like the session data of the portal is not getting replicated.
We tried with simplesession.jsp in this cluster and its session data is properly replicated.
So, the problem seems to be with the session data put by the Portal (and that is the reason why I am posting it here). Every time, the "logged in" check fails with the removal of one of the instances serving the request. Is there a known bug/patch for the session data serialization of WLCS? We are using 3.2 with Apache as the proxy.
Your help is very much appreciated. --
Greg
    GREGORY K. CRIDER, Emerging Digital Concepts
    Systems Integration/Enterprise Solutions/Web & Telephony Integration
    (e-mail) gcrider@[NO_SPAM]EmergingDigital.com
    (web) http://www.EmergingDigital.com

  • How to find out memory consumption for table in HANA without load it into memory

    Hi,
To determine the memory consumption of a table in HANA, you can query table M_CS_TABLES; however, that requires the table to be loaded into memory first. I just wonder if there is another table storing memory consumption information for all HANA tables, regardless of whether they are loaded into memory or not. Below is a screenshot for one of the tables in my system. Since that table is only partially loaded into memory, "Total Memory Consumption (KB):" tells me the memory consumption of the portion loaded into memory. What I am looking for is something like "Estimated Maximum Memory Consumption (KB)", which would give me the total memory consumption for that table, including the portion that isn't loaded into memory. Of course I could use this estimated information, but considering I have close to a thousand tables in my HANA system already, it's not practical to check tables one by one.
    Thanks,
    Xiaogang.

    Hi Xiaogang,
The Estimated Memory Size that you see in the table's runtime information is also available in M_CS_TABLES.
If you don't get the size of a table in the M_CS_TABLES view, then it will also not be available in the runtime information of the table.
Even if tables are not loaded into memory, you can get the estimated size; just try running the query with the filter LOADED = 'NO'.
    Regards,
    Vivek
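To make Vivek's suggestion concrete, here is a hedged sketch of pulling the estimates for all unloaded tables in one go over JDBC. The column name ESTIMATED_MAX_MEMORY_SIZE_IN_TOTAL, the assumption that it reports bytes, and the connection details are all to be verified against your HANA revision's M_CS_TABLES documentation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class EstimatedTableSizes {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port/credentials; requires the HANA JDBC driver (ngdbc.jar).
        String url = "jdbc:sap://hanahost:30015/";
        try (Connection con = DriverManager.getConnection(url, "USER", "PASSWORD");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(
                 "SELECT schema_name, table_name, estimated_max_memory_size_in_total"
                 + " FROM m_cs_tables WHERE loaded = 'NO'")) {
            while (rs.next()) {
                // Value assumed to be in bytes; divide for KB.
                System.out.printf("%s.%s: ~%d KB%n",
                    rs.getString(1), rs.getString(2), rs.getLong(3) / 1024);
            }
        }
    }
}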

  • Memory Consumption in Multidimensional Arrays

    Hi,
I've noticed that the memory consumption of multidimensional arrays in Java is sometimes far above what one could expect for the amount of data being stored. For example, here is a simple program which stores a table containing only integers and reports the memory consumption after it is filled:
import java.util.Random;

public class MemTest {
    public static void main(String[] args) {
        int tableSize = 1000000;
        int noFields = 10;
        Random rnd3 = new Random();
        int arr[][] = new int[tableSize][noFields];
        for (int i = 0; i < tableSize; i++) {
            for (int j = 0; j < noFields; j++) {
                arr[i][j] = rnd3.nextInt(100);
            }
        }
        Runtime.getRuntime().gc();
        Runtime.getRuntime().gc();
        Runtime.getRuntime().gc();
        // Ensures the table's data is still referenced
        System.out.println(arr[rnd3.nextInt(arr.length)]);
        long totalMemory = Runtime.getRuntime().totalMemory();
        long usedMemory = totalMemory - Runtime.getRuntime().freeMemory();
        System.out.println("Total Memory: " + totalMemory / (1024.0 * 1024) + " MB.");
        System.out.println("Used Memory: " + usedMemory / (1024.0 * 1024) + " MB.");
    }
}
    Output:
    Total Memory: 866.1875 MB.
    Used Memory: 62.124053955078125 MB.
    In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers. The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
Rows:100; Cols:100000 -> Used Memory: 43.05 MB
Rows:1000; Cols:10000 -> Used Memory: 43.07 MB
Rows:10000; Cols:1000 -> Used Memory: 43.24 MB
Rows:100000; Cols:100 -> Used Memory: 44.96 MB
Rows:1000000; Cols:10 -> Used Memory: 62.15 MB
Rows:10000000; Cols:1 -> Used Memory: 192.15 MB
    Any ideas about the reasons for that behavior?
    Thanks,
    Marcelo

mrnm wrote:
> In this case the memory consumption was around 20MB above the expected 38MB required for storing 10M integers.
That's only the expected value if you assume that a 2D array of ints is nothing more than a bunch of ints lined up end to end. This is not the case. A "2D" array in Java is really just a plain ol' array whose component type is "reference to array".
> The interesting thing is that the memory consumption varies when the numbers of rows and columns are changed, even though the total amount of items is kept fixed (see below):
That's because, e.g., new int[200][100] creates 200 array objects (and references to each of them), each of which holds 100 ints, while new int[100][200] creates 100 array objects (and references to each of them), each of which holds 200 ints.
Edited by: jverd on Feb 24, 2010 11:17 AM
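A back-of-the-envelope estimate reproduces the trend in the table above. The constants below are assumptions for a 32-bit HotSpot-style layout (12-byte array header, 4-byte ints and references, objects padded to 8 bytes); they differ on 64-bit JVMs, so treat this as a sketch rather than an exact model:

public class ArrayOverheadEstimate {
    // Assumed layout: 12-byte array header, 4 bytes per int or reference,
    // each object padded to a multiple of 8 bytes.
    static long align8(long n) { return (n + 7) & ~7L; }

    static long estimate(long rows, long cols) {
        long outer = align8(12 + rows * 4);        // the array of row references
        long inner = rows * align8(12 + cols * 4); // one int[] per row
        return outer + inner;
    }

    public static void main(String[] args) {
        long[][] shapes = {{100, 100000}, {1000, 10000}, {10000, 1000},
                           {100000, 100}, {1000000, 10}, {10000000, 1}};
        for (long[] s : shapes) {
            System.out.printf("Rows:%d; Cols:%d -> ~%.2f MB%n",
                s[0], s[1], estimate(s[0], s[1]) / (1024.0 * 1024));
        }
    }
}

The per-row header and padding dominate as the number of rows grows, which is why 10000000 x 1 costs so much more than 100 x 100000 for the same number of ints.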

  • Memory consumption when computer locked?

I noticed something strange with my app. If I leave it open and switch to the login screen, its memory consumption rises up to a gigabyte. I discovered this in Activity Monitor when logging back in. But just after I log back in, the process's working set size begins slowly going back to normal; then there are no leaks reported, and the app works just fine.
Other apps don't have this issue. Besides an ordinary Cocoa GUI, my app makes use of multithreading, sockets, and webcam capture (sequence grabber).
    Looks like there's something specific to fast user switching feature that I don't know, maybe some buffer is infinitely filled until there's chance to display, or something.
    Does anyone have idea what it could be?
    Message was edited by: kasym

    Another point that I wanted to mention...
As I mentioned, we are looping with our application through a resultset and "processing" each record. If we simply disconnect the sqlca object (the transaction object the PowerBuilder application uses to connect to the database) and then simply re-connect, say, every 100 records or so... the problem goes away. We simply disconnect, re-connect, and pick up at the point where we left off. This shows me the memory gets flushed every time the session is disconnected.
This is the effect that I want: for the memory to be flushed every so many records, so the application can continue looping through each record in the resultset as if it were doing the first one each time. I understand there may be a performance impact as it flushes the memory every hundred records or so, but I'm willing to sacrifice that to keep it from running out of memory altogether.
    I'd appreciate feedback on this point.
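For readers outside PowerBuilder, the same disconnect-every-N-records pattern looks roughly like this in JDBC terms. This is a hypothetical sketch: the URL, table, and process() are placeholders, and whether the driver actually releases session memory on close depends on the database:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchedReconnect {
    private static final String URL = "jdbc:example://host/db"; // placeholder
    private static final int BATCH = 100;

    public static void main(String[] args) throws Exception {
        long lastId = 0;
        boolean more = true;
        while (more) {
            // A fresh connection per batch, closed at the end of the block,
            // so any per-session memory is released between batches.
            try (Connection con = DriverManager.getConnection(URL, "USER", "PASSWORD");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT id, payload FROM records WHERE id > ? ORDER BY id"
                     + " FETCH FIRST " + BATCH + " ROWS ONLY")) {
                ps.setLong(1, lastId);
                more = false;
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        more = true;
                        lastId = rs.getLong(1);
                        process(rs.getString(2)); // per-record work goes here
                    }
                }
            }
        }
    }

    private static void process(String payload) { /* placeholder */ }
}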

  • SetTransform() memory consumption

    Hi,
I'm currently working on an application which needs to move a sphere very quickly. The position is calculated every 40 ms and set via the TransformGroup.setTransform() method. This raises a problem, as this statement rapidly consumes huge amounts of memory, especially when called in short time intervals.
I also tested it with the java3d example program "AWTInteraction" by simply putting the statement in a for loop and watching the memory climb:
for (int i = 0; i < 1e+6; i++)
    objTrans.setTransform(trans);
The result is a java.lang.OutOfMemoryError.
Is there a solution or workaround for this kind of problem?
Any hints appreciated. (Project has to be finished on Monday. It's really urgent.)
    TIA
    Erich

    Erich,
I've never had any memory problems when dealing with Transforms. Is it perhaps possible that the leak results from another instruction? So far, the only thing I have seen responsible for high memory consumption is working with textures. In order to find the problem, I would check three things in your code:
1. Are you working with textures, and if yes, how does the program behave if the textures are omitted?
2. Are there any "new" instructions inside your loop? If yes, try to reuse objects and eliminate all "new" commands inside the loop.
3. Did you consider the mantra to do all changes on a live scene graph within a behavior (and from the behavior scheduler)? It seems unusual to me to change transforms inside a loop. A sketch of this follows below.
    Good luck,
    Oliver
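On Oliver's third point, a minimal sketch of driving the transform from a Behavior on the behavior scheduler, reusing a single Transform3D instead of allocating per update; MoveBehavior and the 40 ms wakeup mirror the poster's setup, and the motion itself is a placeholder:

import java.util.Enumeration;
import javax.media.j3d.Behavior;
import javax.media.j3d.Transform3D;
import javax.media.j3d.TransformGroup;
import javax.media.j3d.WakeupOnElapsedTime;
import javax.vecmath.Vector3f;

public class MoveBehavior extends Behavior {
    private final TransformGroup target;
    private final Transform3D t3d = new Transform3D(); // reused, never re-allocated
    private final Vector3f pos = new Vector3f();
    private final WakeupOnElapsedTime tick = new WakeupOnElapsedTime(40); // 40 ms

    public MoveBehavior(TransformGroup target) {
        this.target = target;
    }

    public void initialize() {
        wakeupOn(tick);
    }

    public void processStimulus(Enumeration criteria) {
        pos.x += 0.01f;            // placeholder motion; compute the real position here
        t3d.setTranslation(pos);
        target.setTransform(t3d);  // runs on the behavior scheduler
        wakeupOn(tick);            // re-arm for the next tick
    }
}

Remember to set the ALLOW_TRANSFORM_WRITE capability on the TransformGroup and scheduling bounds on the behavior before the graph goes live.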

  • Memory limitation for session object!

What is the memory limitation for using session objects?
    venu

As already mentioned, there is no actual memory limitation within the specification; it only depends on the JVM's settings.
How different app servers handle memory management of session objects is another part of the puzzle, but in general you should not have problems writing any object to the session.
We once had the requirement to keep big objects in the session. We decided to build a ResourceFactory that returns the objects, and to store only unique IDs in the session.
We could later build on this and perform special serialization tasks for big objects in the distributed environment. A sketch of the idea is below.
    Dietmar
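A minimal sketch of the pattern Dietmar describes, assuming the Servlet API; ResourceFactory, the attribute name, and the in-memory map are illustrative (a real implementation would also need eviction):

import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import javax.servlet.http.HttpSession;

public class ResourceFactory {
    // Application-scoped registry holding the heavy objects outside the session.
    private static final ConcurrentMap<String, Object> STORE = new ConcurrentHashMap<>();

    /** Registers a heavy object and stores only its small ID in the session. */
    public static void attach(HttpSession session, Object heavyObject) {
        String id = UUID.randomUUID().toString();
        STORE.put(id, heavyObject);
        session.setAttribute("resourceId", id); // the session carries just a string
    }

    /** Looks the heavy object back up from the ID kept in the session. */
    public static Object resolve(HttpSession session) {
        String id = (String) session.getAttribute("resourceId");
        return id == null ? null : STORE.get(id);
    }
}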

  • How to measure memory consumption during unit tests?

    Hello,
I'm looking for simple tools to automate measurement of overall memory consumption during some memory-sensitive unit tests.
I would like to apply this when running a batch of some test suite targeting tests that exercise memory-sensitive operations.
The intent is to verify that a modification of the code in this area does not introduce a regression (raise) of memory consumption.
I would include it in the nightly build and monitor the evolution of the summary figure (a-ah, the "userAccount" test suite consumed 615Mb last night, compared to 500Mb the night before... What did we check in yesterday?)
Running on Win32, the system-level info on memory consumed is known not to be accurate.
Using perfmon is more accurate but seems overkill - plus it's difficult to automate; you have to attach it to an existing process...
I've looked at the hprof included in Sun's JDK, but it seems to be targeted at investigating problems rather than discovering them. In particular, there isn't a "summary line" of the total memory consumed...
    What tools do you use/suggest?

> However this requires manual code in my unit test classes themselves, e.g. in my setUp/tearDown methods. I was expecting something more orthogonal to the tests, that I could activate or not depending on the purpose of the test.
Some IDEs display memory usage and execution time for each test/group of tests.
> If I don't have another option, OK I'll wire my own pre/post memory counting, maybe using AOP, and will activate memory measurement only when needed.
If you need to check the memory used, I would do this. You can do the same thing with AOP, but unless you are already using an AOP library, I doubt it is worth the additional effort.
> Have you actually used your suggestion to automate memory consumption measurement as part of daily builds?
Yes, but I have less than a dozen tests which fail if the memory consumption is significantly different. I have more tests which fail if the execution time is significantly different.
Rather than use the setUp()/tearDown() approach, I use the testMethod() as a wrapper for the real test and add the check inside it. This is useful as different tests will use different amounts of memory.
> Plus, I did not understand your suggestion, can you elaborate?
> - I first assumed you meant freeMemory(), which, as you suggest, is not accurate, since it returns "an approximation of [available memory]"
freeMemory() gives the free memory out of the total. The total can change, so you need to take total - free as the memory used.
> - I re-read it and now assume you do mean totalMemory(), which unfortunately will grow only when more memory than the initial heap setting is needed.
More memory is needed when more memory is used. Unless your test uses a significant amount of memory, there is no way to measure it reliably; i.e. if a GC is performed during a test, the test can appear to use less memory than it consumes.
> - Eventually, I may need to include calls to System.gc(), but I seem to remember it is best-effort only (endless discussion) and may not help accuracy.
If you do a System.gc() followed by a Thread.yield() at the start, it can improve things marginally. A sketch of the wrapper approach is below.
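A minimal sketch of the wrapper approach described above, in JUnit 4 style; the 50MB budget and the measured operation are made-up examples:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class MemoryRegressionTest {
    // Best-effort snapshot of used heap: total - free, after nudging the GC.
    private static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        for (int i = 0; i < 3; i++) {
            System.gc();
            Thread.yield();
        }
        return rt.totalMemory() - rt.freeMemory();
    }

    @Test
    public void userAccountSuiteStaysUnderBudget() {
        long before = usedMemory();
        byte[] result = runMemorySensitiveOperation();
        long used = usedMemory() - before;
        System.out.println("Used ~" + used / (1024 * 1024) + " MB");
        // Keep a reference alive so the allocation is still counted above.
        assertTrue(result.length > 0);
        assertTrue("memory regression: " + used + " bytes", used < 50L * 1024 * 1024);
    }

    // Stand-in for the real memory-sensitive operation under test.
    private byte[] runMemorySensitiveOperation() {
        return new byte[10 * 1024 * 1024];
    }
}

As the reply above notes, this is only reliable when the operation under test uses a significant amount of memory relative to GC noise.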

  • Problems updating projects to new versions of Premiere (CS5 to CC and CC to CC 2014) Memory consumption during re-index and Offline MPEG Clips in CC 2014

    I have 24GB of RAM in my 64 bit Windows 7 system running on RAID 5 with an i7 CPU.
    A while ago I updated from Premiere CS5 to CC and then from Premiere CC to CC 2014. I updated all my then current projects to the new version as well.
    Most of the projects contained 1080i 25fps (1080x1440 anamorphic) MPEG clips originally imported (captured from HDV tape) from a Sony HDV camera using Premiere CS5 or CC.
    Memory consumption during re-indexing.
When updating projects I experienced frequent crashes going from CS5 to CC, and later going from CC to CC 2014. Updating projects caused all clips in the project to be re-indexed. The crashes were due to the re-indexing process consuming excessive RAM, and I had to re-open each project several times before the re-index would eventually complete successfully. This is despite using the setting that limits the RAM consumed by Premiere to much less than the 24GB of RAM in my system.
    I checked that clips played; there were no errors generated; no clips showed as Offline.
Some clips now "Offline: Importer" in CC 2014
    Now, after some months editing one project I found some of the MPEG clips have been flagged as "Offline: Importer" and will not relink. The error reported is "An error occurred decompressing video or audio".
    The same clips play perfectly well in, for example, Windows Media Player.
    I still have the earlier Premiere CC and the project file and the clips that CC 2014 importer rejects are still OK in the Premiere CC version of the project.
    It seems that the importer in CC 2014 has a bug that causes it to reject MPEG clips with which earlier versions of Premiere had no problem.
    It's not the sort of problem expected with a premium product.
    After this experience, I will not be updating premiere mid-project ever again.
    How can I get these clips into CC 2014? I can't go back to the version of the project in Premiere CC without losing hours of work/edits in Premiere CC 2014.
    Any help appreciated. Thanks.

    To answer my own question: I could find no answer to this myself and, with there being no replies in this forum, I have resorted to re-capturing the affected HDV tapes from scratch.
    Luckily, I still had my HDV camera and the source tapes and had not already used any of the clips that became Offline in Premiere Pro CC 2014.
It seems clear that the MPEG importer in Premiere Pro CC 2014 rejects clips that Premiere Pro CC once accepted. It's a pretty horrible bug that ought to be fixed. Whether Adobe has a workaround, or at least knows about this issue and is working on it, is unknown.
It also seems clear that the clip re-indexing process that occurs when upgrading a project (from CS5 to CC and also from CC to CC 2014) has a bug which causes memory consumption to grow continuously while it runs. I have 24GB of RAM in my system and, regardless of the amount of RAM I allocated to Premiere Pro, it would eventually crash. Fortunately, on restarting Premiere Pro and re-loading the project, re-indexing would resume where it left off and, depending on the size of the project (number of clips to be indexed), after many repeated crashes and restarts the re-indexing would eventually complete and the project would be OK after that.
    It also seems clear that Adobe support isn't the greatest at recognising and responding when there are technical issues, publishing "known issues" (I could find no Adobe reference to either of these issues) or publishing workarounds. I logged the re-index issue as a bug and had zero response. Surely I am not the only one who has experienced these particular issues?
    This is very poor support for what is supposed to be a premium product.
    Lesson learned: I won't be upgrading Premiere again mid project after these experiences.
