Converting from non-static to static

I am trying to convert a method from non-static to static. I changed a few variables to get it to work, and everything in the method is now static except for the getClass() method it calls. Does anyone know how I could work around this?
Here's the method:
public static void addObjectToPanel(String type, String name) {
        JLabel label = new JLabel("The object");
        // The getClass() here is causing the problem
        label.setIcon(new ImageIcon(getClass().getResource("object.gif")));
        label.setBounds(xCoordinate, yCoordinate, width, height);
        display.setLayout(null);
        display.add(label);
        // After the image is placed by this method, the x coordinate is moved to allow for the next object
        xCoordinate += 150;
        // This resizes the JPanel as images are added, which solves a potential issue with the JScrollPane
        display.setPreferredSize(new Dimension((900 + xCoordinate), (600 + yCoordinate)));
}

Is there any way I can get around using getClass(), or is there anything else I could do? Thanks.

It's a static method, so you should already know the name of the class it is in.

Sorry, I didn't see that they were talking about the actual class the method is in; I missed something written earlier.
I am now using label.setIcon(new ImageIcon("object.gif")); and everything is working fine now.
Thanks everyone for your help.

Be warned that this is not the same as what you did initially. The initial version looks in the class loading path; the final version looks in the current directory. I believe your intention is better reflected by the initial version.
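In a static method, the usual substitute for getClass() is a class literal, which reaches the same class loader and therefore performs the same classpath lookup as the original code. A minimal sketch (the class name PanelBuilder is hypothetical; use the actual enclosing class):

```java
import java.net.URL;
import javax.swing.ImageIcon;

public class PanelBuilder {
    // Hypothetical enclosing class -- substitute the class the method
    // actually lives in.
    public static void addObjectToPanel(String type, String name) {
        // A static method has no instance, so getClass() is unavailable;
        // a class literal reaches the same class loader and performs the
        // same classpath lookup as the original instance code.
        URL url = PanelBuilder.class.getResource("object.gif");
        if (url != null) {
            ImageIcon icon = new ImageIcon(url);
            // ... set the icon on the label as before ...
        }
    }

    public static void main(String[] args) {
        // The class-literal lookup works for any classpath resource;
        // .class files are always visible, so this prints true.
        System.out.println(PanelBuilder.class.getResource("/java/lang/Object.class") != null);
    }
}
```

Note that Class.getResource resolves the path relative to the class's package unless it starts with "/".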

Similar Messages

  • Converting from non-batch to batch management

    Hi guys,
Can you tell me the best strategy to convert a system from non-batch managed to batch managed? Batch management should have been implemented from the start, but my client only just realized that mistake, and we already have inventory and all kinds of transactional data: purchase orders, sales orders, production orders...
I am looking for the least painful strategy, as detailed as possible.
    Thank you.
    Fotso

Hi,
    Procedure to be followed to activate Batch Management for Materials: -
    1. Mark Deletion Flag for all the Open Purchasing Documents of Materials
    2. Technical Completion of Production Orders to cancel open Reservations of Materials
    3. Create a Dummy Material for Material (Batch Management Active)
    4. Generate details of Unrestricted Stock of Materials as per Bill of Entry
    5. Transfer Stock of Materials to Dummy Material using Movement Type "309" in previous period (Stock will be transferred Bill of Entry wise)
    6. Activate batch Management for Materials
    7. Transfer Stock from Dummy Material to Materials
    Note: -
    1) If there is stock available in previous period and also stock available in current period then Transfer Posting from Material to Material can be done directly in previous period. (Comparison of Tables MARD and MARDH)
    2) In case if there is stock in previous period but no stock available in current period then we have to post an initial stock entry using movement type "561" to generate stock in current period (Here Quantity = Stock in Previous period)
    3) Reverse these Material Documents in previous period
    There are several SAP notes that give some useful information:
    30656 Change base unit of measure/batch mngt requirement
    533383 Resetting batch management requirement
    533377 Conversion of batch level
    I hope this helps!
    Thanks & Regards,
    Kiran

  • Importing from an Oracle database converted from non-unicode to unicode - SSIS 2012

    Hi,
I have some SSIS 2012 packages that read from a non-Unicode Oracle database into a non-Unicode SQL Server database.
In a few days the Oracle database will be converted to Unicode, so I need to update the SQL database and the SSIS packages. Obviously no data-conversion transformations are present in the packages, and I'd like to avoid adding more of them.
As a first step, I'm trying to convert a SQL table to Unicode format, but it isn't possible to convert between non-Unicode and Unicode string data types.
I did not expect an error about the conversion from the non-Unicode Oracle database (not yet converted) to the Unicode SQL Server database: such a conversion doesn't lose any information. To me it only makes sense to get an error going from Unicode to non-Unicode.
Any suggestions to solve this issue with a minimum of development effort? Many thanks

No; once you change data types to Unicode, you have to refresh the metadata settings within the SSIS packages for them to work. SSIS cannot make metadata changes at runtime, so someone has to update the package metadata to Unicode to make sure everything works correctly after the change.
What you can do is create test databases in Oracle and SQL Server in Unicode and build modified copies of the packages against them. Once you change the production databases, move those modified copies to production as well, after making the necessary configuration changes (server name, database name, folder paths, etc.), and they will continue to work without causing any further downtime for the customer.

  • Converting from non partitioned to partitioned table

    Hi gurus,
I need to convert a non-partitioned table to a partitioned one. The most flexible way is the DBMS_REDEFINITION package.
I don't have access to execute this package, and when I asked for EXECUTE permission on my dev system, the client rejected the request with the suggestion that
"DBMS_REDEFINITION is a very slow migration method which has never been used here for such migrations, so I would not recommend using it, as it could trigger bugs and unexpected side effects."
Is that true?
What alternate method could I go for?
Please suggest.
S

    I don't think DBMS_REDEFINITION has bugs.  However, you (and the client) need to be familiar with the steps involved.
    Other than that, you'd have to build a partitioned table and insert data from the existing table.  You can speed up the insert with direct path Parallel Insert.
    You also need to  build indexes on the new (partitioned) table.  Define constraints if necessary.  Execute grants to other schemas, if required.
    Hemant K Chitale

  • Convert from Non-RAC to RAC-What should be the approach?

    Hi All,
    We have a single node installation with EBS 11.5.10.2 and 9iR2 db.
    Our requirement is to upgrade the db to 10gR2 and implement RAC.
I am really confused as to how we should proceed with this.
1. If we upgrade the database to 10g first, can it be converted to RAC after the upgrade? If yes, how do we do it? Using rconfig?
2. What exactly do we need to do regarding the filesystem? At which step does this conversion have to be done?
3. If our current filesystem is raw, at which point do we need to convert it to ASM/OCFS? That is, before upgrading the database or after it is upgraded to 10g?
Looking forward to your inputs.
    Thanks in advance.

You can do it either way. It kind of depends on your requirements. Do you have minimal downtime requirements to meet? Do you have the ability to set up your hardware at your leisure, or do you have to reuse your current hardware? Given all the time and hardware I needed, I would do it like this:
    1. Acquire the new hardware.
    2. Configure the hardware and shared storage.
    3. Install clusterware.
    4. Install Oracle 10gr2.
    5. Backup your old instance.
    6. Migrate your old database using export/import or whatever method you like.
    Whatever method you choose, it is vital that you rehearse on a non-critical instance to make sure your methodology is valid.

  • Button to Convert from Dynamic to Static form Attach to Email then Convert Back

    I have this working, but only on reader 9. What I need is a different way to accomplish this. The scenario is I have a dynamic form that we fill out for a customer. We want to email it to a customer, but we don't want them to have access to our calculations.
    So I want a button that turns the dynamic form into a static form than attach the static PDF to an email, than I want it to change back to dynamic on the original.
    This works in acrobat reader 9, but not in reader 7.
My code for the submit button is this:
for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
  var oFields = xfa.layout.pageContent(nPageCount, "field");
  var nNodesLength = oFields.length;
  // Set the field property.
  for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
    oFields.item(nNodeCount).access = "readonly";
  }
}
That code works fine in Acrobat/Reader 7 and later; the problem comes in when I switch it back to dynamic with this code in the postSubmit event:
for (var nPageCount = 0; nPageCount < xfa.host.numPages; nPageCount++) {
  var oFields = xfa.layout.pageContent(nPageCount, "field");
  var nNodesLength = oFields.length;
  // Set the field property.
  for (var nNodeCount = 0; nNodeCount < nNodesLength; nNodeCount++) {
    oFields.item(nNodeCount).access = "open";
  }
}
    This nets me an error on opening the form in any reader previous to reader 9:
    Invalid enumerated value:postSubmit
    Anyone have any ideas?
If there is a way to do this in FormCalc I would rather do it that way, because there are a few fields I'd like to hide before the customer gets it as well.
    Thanks

    Well, I don't know, but one might suspect that one of these times when the DNG converter gets upgraded (and it does from time to time) one of the things that might happen is that at some point Nikon might free up all its proprietary stuff in a NEF and that would be available to Adobe to include in an upgrade.
    Hmmmm, in that case it might be worthwhile to reconvert NEFs--but that would mean redeveloping all those digital negatives.... Whoa....maybe I better stop this line of reasoning before I really upset you and others who have converted from NEFs. You are discovering some of the reasons why I don't do that just to save a little space--and be free of sidecars, of course (a noble motive, for sure).
I do convert for other shooters who are clients, but I always store the NEFs, DNGs, and derivatives together in the same folder, which of course raises storage loads, but that is the way they want them.
    Try the DNG UtoU forum. There are some very knowledgeable guys over there.

Converting from a Business View connection to a non-Business View connection

    I have a report on our Crystal Reports Server that I need to move to a machine with Crystal Reports XI installed and no connection to the Business Objects Server.
When I create an ODBC connection pointing to the same table and then try to use Set Datasource Location to update the data source, I get an error message:
    Converting from a Business View connection to a non-Business View connection is not supported.
What can I do? It is a complex report, and I don't want to take the time and energy to rewrite it.
    Thank You for any answers.
    <a href="http://farm4.static.flickr.com/3234/2942702182_72365cc04f_o.jpg">Screen Shot</a>

    Hello Brundibar,
I recommend posting this query to the Universe Designer and Business Views Designer (Semantic Layer) forum.
    This forum is dedicated to topics related to the universes and business views.
    It is monitored by qualified technicians and you will get a faster response there.
    Also, all Universe Designer and Business Views Designer queries remain in one place and thus can be easily searched in one place.
    Best regards,
    Falk

  • Converting from a Business View connection to a non-Business View Error

I have a CR 2008 report with SP1. The report has a subreport. The main report is based on a Business View, but the subreport uses an ODBC connection. When I try to update the ODBC connection I receive an error message telling me:
Quote:
"Converting from a Business View connection to a non-Business View connection is not supported."
I do not want to update the Business View, just the ODBC connection for the subreport.

    Hi Michael,
When you try to update both the Business View and the ODBC connection at once, you will be prompted with that error.
You need to break the link in the report and update the connections individually.
    Thanks,
    Naveen.

  • Converting from finished products non-batch to batch management

    Hi guys,
Can you tell me the best strategy to convert a system from non-batch managed to batch managed for the finished and salable product? Batch management should have been implemented from the start, but my client only just realized that mistake, and we already have inventory and all kinds of transactional data: purchase orders, sales orders, production orders...
I am looking for the least painful strategy, as detailed as possible.
    Thank you.
    Fotso

It's going to be painful...
1) Reverse all material stock: create a new cost center and consume the available stock against that cost center using movement type 201.
2) If there are any open sales orders for the material, delete the line items in those sales orders.
3) Update the batch management indicator in the material master.
4) Add the material back into the respective sales orders.
5) Post the stock back with batches using reversal movement type 202 from the cost center.
    Regards,
    GSL.

  • Is there an easy way to convert from a String with a comma, ie. 1,000.00 to

    Is there an easy way to convert from a String with a comma, ie. 1,000.00 to a float or a double?
    thanks,
    dosteov

Like DrClap said: DecimalFormat. However, make sure you understand the Locale issues, as explained at http://java.sun.com/j2se/1.3/docs/api/java/text/DecimalFormat.html (which refers to NumberFormat as well), and use Locales explicitly. For example:

import java.text.DecimalFormat;
import java.text.ParseException;

public class FormatTest {
  public static void main(String[] args) {
    try {
      // see NumberFormat.getInstance()
      DecimalFormat fmt = new DecimalFormat();
      Number num = fmt.parse("1,000.01");
      System.out.println(num.doubleValue());
    } catch (ParseException pe) {
      System.out.println(pe);
    }
  }
}

This most likely seems OK on your system, but may print "1.0" (or even fail) on some non-English platforms!
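A locale-independent variant pins the parsing conventions explicitly instead of relying on the platform default; a small sketch (the class name LocaleParse is just for illustration):

```java
import java.text.NumberFormat;
import java.text.ParseException;
import java.util.Locale;

public class LocaleParse {
    // Parse using US conventions: ',' groups thousands and '.' is the
    // decimal separator -- regardless of the platform's default locale.
    static double parseUS(String s) throws ParseException {
        return NumberFormat.getInstance(Locale.US).parse(s).doubleValue();
    }

    public static void main(String[] args) throws ParseException {
        System.out.println(parseUS("1,000.01")); // prints 1000.01
    }
}
```

This way the same input parses identically on English and non-English machines.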
    When performing calculations, also see http://developer.java.sun.com/developer/JDCTechTips/2001/tt0807.html
    a.

  • Error:Type mismatch: cannot convert from Long to Long

Hi friends,
I have a problem. I have a JSP that does some long conversions, and it works fine when I run it on my machine (running Tomcat 5.0), but when I deploy the file on a server running Tomcat 5.5.7, it throws this error:
org.apache.jasper.JasperException: Unable to compile class for JSP
An error occurred at line: 20 in the jsp file: /abc.jsp
Generated servlet error:
Type mismatch: cannot convert from Long to Long
Can anyone tell me where I am going wrong?

Here is an example of doing it with a JavaBean... the bean looks like this:
package net.thelukes.steven;

import java.io.Serializable;
import java.text.DateFormat;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class FormHandlerBean implements Serializable {
     private static final long serialVersionUID = 1L;
     private Date startTime = null;
     private DateFormat dateFormatter;

     public FormHandlerBean() {
          setDateFormat("yyyy-MM-dd hh:mm:ss");
     }

     public void setStart(String strt) {
          setStartAsString(strt);
     }

     private void setStartAsString(String strt) {
          setStartAsDate(getDate(strt));
     }

     private void setStartAsDate(Date d) {
          startTime = d;
     }

     private Date getDate(String s) {
          Date d = null;
          try {
               d = dateFormatter.parse(s);
          } catch (ParseException pe) {
               System.err.print("Error Parsing Date for " + s);
               System.err.println(".  Using default date (right now)");
               pe.printStackTrace(System.err);
               d = new Date();
          }
          return d;
     }

     public long getStartAsLong() {
          return getStart().getTime();
     }

     public String getStartAsString() {
          return Long.toString(getStartAsLong());
     }

     public Date getStart() {
          return startTime;
     }

     public void setDateFormat(String format) {
          dateFormatter = new SimpleDateFormat(format);
     }
}

You would only need to make public the getStartXXX methods that need to be accessed from the JSP. For example, if you will not need the long value of the time, then you do not need to make getStartAsLong public...
    The JSP looks like this:
    <html>
      <head>
        <title>I got the Form</title>
      </head>
      <body>
        <h3>The Output</h3>
        <jsp:useBean class="net.thelukes.steven.FormHandlerBean" id="formHandler"/>
        <%
             formHandler.setStart(request.getParameter("start"));
        %>
        <table>
          <tr><td>Start as String</td><td><jsp:getProperty name="formHandler" property="startAsString"/></td></tr>
          <tr><td>Start as Date</td><td><jsp:getProperty name="formHandler" property="start"/></td></tr>
          <tr><td>Start as long</td><td><jsp:getProperty name="formHandler" property="startAsLong"/></td></tr>
        </table>
      </body>
</html>

If this were a servlet processing the form rather than a JSP, I might throw the ParseException that can occur in getDate and catch it in the servlet, which could then forward back to the form telling the user they entered a mis-formatted text value... But since JSPs should be mainly display, I catch the exception internally in the bean and assign a safe value...

  • Display from non-MIDlet class?

    Display from non-MIDlet class?
    I havel a method in my main MIDlet that flashes up a message as an Alert:
public void doAlert(String sAlertText) {
    Alert alSending;
    alSending = new Alert(null, sAlertText, null, AlertType.INFO);
    alSending.setTimeout(3000);
    m_display.setCurrent(alSending);
}

m_display is declared and instantiated in the main MIDlet:

public class Boss extends MIDlet implements CommandListener {
    public static Display m_display;
    public Boss() {       /** Constructor */
        m_display = Display.getDisplay(this);
    }
}

Now when I try to call this from another class:

new Boss().doAlert("Eat lead, Bambi!");

I get: "SecurityException: Application not authorized to access the restricted API". The same happens if I try to use a reference to a MIDlet rather than to a Display. Apparently, you can't instantiate a MIDlet from within another MIDlet/class.
So, how do you obtain a reference to the currently-running MIDlet or its Display from a different class?

Thanks for your reply, but I finally solved it. I found a way of getting a reference to my MIDlet, so I had control of its display.
Maybe it would be a good idea to have some classes that write to the phone's screen without being a MIDlet, like the System.out.* classes in traditional Java.
    See you.
    David.
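For reference, the usual MIDP workaround is to store a reference to the running MIDlet (or its Display) in a static field set from the constructor, since the application manager creates the one MIDlet instance. The same pattern in plain Java, with the display reduced to a string buffer so it can run anywhere (all names hypothetical):

```java
// Plain-Java sketch of the "static self-reference" pattern used in MIDP:
// the framework creates the single Boss instance, and other classes reach
// its display through a static accessor instead of calling new Boss().
public class Boss {
    private static Boss instance;                 // the running "MIDlet"
    private final StringBuilder display = new StringBuilder();

    public Boss() {
        instance = this;                          // remember ourselves
    }

    public static Boss getInstance() {
        return instance;
    }

    // Stand-in for doAlert(...) / m_display.setCurrent(...)
    public void doAlert(String text) {
        display.append(text);
    }

    public String shown() {
        return display.toString();
    }

    public static void main(String[] args) {
        new Boss();                               // the AMS's job in MIDP
        Boss.getInstance().doAlert("Eat lead, Bambi!");  // any other class
        System.out.println(Boss.getInstance().shown());
    }
}
```

In a real MIDlet, the static field would hold the Display obtained from Display.getDisplay(this) in the constructor, exactly as the original Boss class already does; the only missing piece is the static accessor in place of new Boss().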

I have taken off/turned off iCloud on my Mac mini, but when I write an email and use Contacts it will convert a non-iCloud email to an iCloud email automatically. I really don't want this. Any way to stop this automatic conversion?

I have taken off/turned off iCloud on my Mac mini (OS 10.8), but when I send an email and use Contacts, it will convert the non-iCloud email to an iCloud email automatically. Any way to stop this automatic conversion?

    Robert...
    the iCloud webserver wont accept my password for a .mac login, nor will it allow me to change it
    See if you can change your password >  Apple - My Apple ID
    If that doesn't help, launch iTunes on your computer.
    From the iTunes menu bar click iTunes / Preferences then select the Advanced tab.
    Click: Reset warnings and Reset cache
    Click OK.
    Restart your computer.
If that doesn't help...
    Moreover, when I try to go into my .mac account on the web,
    Delete all apple cookies and empty your browser cache.
    See if  you can access your account at iCloud.com

Problem in Database conversion from US7ASCII to UTF8

    Hi,
    We are facing the following problem while converting the database from US7ASCII to UTF8:
    We have recently changed the database character set from US7ASCII to UTF8 for the internationalization
    purpose. We ran the Character set scanner utility and it did report that some data may pose problems.
    We followed the the below mentioned process to convert into UTF8 -
    1) alter database character set utf8
    2) alter database national character set utf8.
    Now we find some problem while working with the old data in our application which is java based.
    We are getting the following error "java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv".
    We further analyzed our data and found some interesting things :
    e.g.
    DB - UTF8.
NLS_LANG is also set to UTF8.
    Select name from t1 where name like 'Gen%';
    NAME
Genève
But when we find the length of the same data, it shows this:
NAME LENGTH(NAME) VSIZE(NAME)
Genève 4 6
The question is why the length shows only 4, and why, when we try to use the SUBSTR function, it extracts like the following:
select name,substr(name,4,1) from t1 where name like 'Gen%';
NAME SUB
Genève ève
We executed the above queries on the US7ASCII DB and it works fine: the length shows 6, and SUBSTR extracts just 'è' as well.
We also used the DUMP function on the UTF8 DB for the above query; this is the result:
    select name,length(name),vsize(name),dump(name) from t1 where name like 'Gen%';
    NAME LENGTH(NAME) VSIZE(NAME) DUMP(NAME)
Genève 4 6 Typ=1 Len=6: 71,101,110,232,118,101
We checked the data thoroughly, and it seems 'è' (the accented e) is posing the problem.
We want to know where the problem is and how to overcome it.
    Further, we tried all of the following :
    1)
    Export Server: US7ASCII
    Export Client: did not set NLS_LANG / NLS_CHAR, so presumably US7ASCII as well
    Import Client: did not set NLS_LANG / NLS_CHAR, so presumably US7ASCII as well
    Import Server: UTF8
    RESULT: Acute e became h
    2)
    Export Server: US7ASCII
    Export Client: did not set NLS_LANG / NLS_CHAR, so presumably US7ASCII as well
    Import Client: NLS_LANG=AMERICAN_AMERICA.UTF8 and NLS_CHAR=UTF8
    Import Server: UTF8
    RESULT: IMP 00016 error
    3)
    Export Server: US7ASCII
    Export Client: NLS_LANG=AMERICAN_AMERICA.UTF8 and NLS_CHAR=UTF8
    Import Client: did not set NLS_LANG / NLS_CHAR, so presumably US7ASCII as well
    Import Server: UTF8
    RESULT: Acute E became h
    4)
    Export Server: US7ASCII
    Export Client: NLS_LANG=AMERICAN_AMERICA.UTF8 and NLS_CHAR=UTF8
    Import Client: NLS_LANG=AMERICAN_AMERICA.UTF8 and NLS_CHAR=UTF8
    Import Server: UTF8
    RESULT: Acute e became h
    5)
    Tried using Update sys.props$
    set value$='UTF8'
    where name='NLS_CHARACTERSET'
    RESULT: Acute e shows properly but it gives problem in the application
    "java.sql.SQLException: Fail to convert between UTF8 and UCS2: failUTF8Conv"
    Looking further it was observed the following:
when you try this command on a column 'city' in a table which contains 'Genèva' (note the accented e after n), it shows
command: select length(city), vsize(city),substr(city,4,1),city from cities
Result: 4 6 èva Genèva
if you look at the value of substr(city,4,1), you will see the problem. Also note that the length shows 4 while the size shows 6. Moreover, when these records are selected through the JDBC driver, they start giving problems.
    6)
    Actually the above (point no. 5) is similar to changing the character set of the database with 'ALTER DATABASE CHARACTER SET UTF8'. Same problem is observed then too.
    7)
We have also tried another method: changing the third byte of the export file, which specifies the character set, to the UTF8 code by editing the export file with a hexadecimal editor. After import, the same problem was observed as described in (5) and (6) above.
We have no more ideas how to migrate without corrupting the data. Of course we know the records in which these characters occur, through Oracle's csscan utility, but we do not want to manually rectify each and every record by replacing the character with an ASCII one. Any other idea as to how this can be accomplished?
    Thanx
    Ashok

    The problem you have is that although your original database is defined as US7ASCII, the data it contains is more than is covered by this code page (as the reply on Sept 4 to the previous posting already said).
This has probably happened because the client was also defined as US7ASCII, and when the DB and client are defined as having the same character set, no conversion (or checking) takes place when data is passed between them. However, if you are using a Windows client, then it will in fact be using Windows code page 1252 (Latin-1) or similar, and this allows many more characters, including è (accented e). So a user can enter all these characters and store them in the database, and similarly read them from the database, because the data transfer is transparent.
When you did ALTER DATABASE CHARACTER SET UTF8, this only changed the label on the database, not its contents. However, only part of the contents is valid UTF8; any character above 7F (like è) is invalid. If your original client now uses the database, code page transformation will take place because the client and DB have different character sets defined. The invalid codes can then cause problems.
Without being able to explain in detail everything that has happened, it may help to see what your è (dec 232, x'E8') looks like. The actual data has not changed (you can see this, as it is still reported as 232). However, the binary code there (11101000) is invalid UTF8. UTF8 encodes a character in 1 to 4 bytes, and the first bits of a UTF8 character tell how many bytes it uses: 0xxxxxxx means one byte (same as the corresponding US-ASCII character), 110xxxxx that it uses 2 bytes, 1110xxxx that it uses 3 bytes, etc. So if you interpret what is there as UTF8, it looks like the first byte of a 3-byte character, which explains why the substringing is giving you the other 2 bytes as well.
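The byte arithmetic above can be checked in a few lines of Java (the class name CharsetDemo is just for illustration):

```java
import java.nio.charset.StandardCharsets;

public class CharsetDemo {
    public static void main(String[] args) {
        String s = "Genève";
        byte[] latin1 = s.getBytes(StandardCharsets.ISO_8859_1);
        byte[] utf8   = s.getBytes(StandardCharsets.UTF_8);
        System.out.println(latin1.length); // 6: the accented e is the single byte 0xE8 (232)
        System.out.println(utf8.length);   // 7: in UTF-8 it becomes two bytes, 0xC3 0xA8

        int b = latin1[3] & 0xFF;          // the byte DUMP() reported as 232
        System.out.println(b);             // 232
        // 0xE8 = 1110_1000: read as a UTF-8 lead byte, the 1110 prefix
        // announces a 3-byte character, so a routine that trusts lead
        // bytes counts G, e, n, [one 3-byte char] = 4 "characters".
        System.out.println((b >> 4) == 0b1110); // true
    }
}
```

This matches the DUMP() output in the question: six Latin-1 bytes, the fourth being 232, which a UTF8-labelled database misreads as the start of a 3-byte character.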
Can you fix this without losing data? I believe yes. First you should check what other characters are being flagged by the scan, and see if they are contained in another standard character set. If they are only Western European accented characters, then WE8ISO8859P1 is probably OK, but watch out for the euro sign, which Windows has at x'80', an undefined character in ISO 8859-1.
    You can see the contents of the Microsoft Windows Codepage 1252 at: http://www.microsoft.com/globaldev/reference/sbcs/1252.htm
    For a listing of the US-ASCII defined characters see http://czyborra.com/charsets/iso646.html and for ISO 8859-1 see http://czyborra.com/charsets/iso8859.html#ISO-8859-1
If all is well, you can first ALTER DATABASE CHARACTER SET to WE8ISO8859P1. This will not change any data, but it ensures that all the data gets exported. Then export the DB and import it into a UTF8 DB. This will convert the non-US-ASCII characters to Unicode. You will also have to change the clients' character set to something other than US-ASCII, or they will just see ? for the other characters.
    Good Luck!

  • Convert from PSE10 to Lightroom - metadata concerns

    After one too many frustrations with PSE, I've been looking for an alternative program.  I don't care about editing - metadata (Organizer type features) are my focus.  I looked at a few non-Adobe programs - a big problem with them is I can't import my PSE10 catalog, with 12,000 photos.  On the PSE forum, the most common advice was to go to Lightroom, so I was holding out hope for that.  However, I've played with the LR 5 trial for half a day now, and as far as I can tell, it doesn't make good use of the PSE catalog either!  This is very disappointing for me, but I'm wondering if some of you folks can show me things I'm missing.
I'm looking into converting from PSE10 to LR 5 on a Windows 7 machine. I have about 12,000 photos, half TIFs from scanned images and half JPGs from digital cameras. As I said, I only care about metadata management. My focus is genealogy, so capturing the original dates of scanned pictures is very important, and these are often only roughly known.
My main issue is that it seems to me that LR is taking the metadata from the photo files, not the catalog. I used the LR feature "Upgrade Photoshop Elements Catalog". I say this for two reasons:
    (1) When PSE10 writes tags to files, it adds to any tags that are already there.  So although PSE10 shows only the updated set of tags in the GUI, the photo files have both old and new tags.  After doing the LR Upgrade PSE Catalog, I'm seeing old and new tags.  It appears to be ignoring what is in the catalog and looking at the file.  (As far as I know, the old tags are not present in the PSE catalog anymore, so it can't be looking at the converted catalog.)
    (2) PSE10 allow you to tag photos with incomplete dates (e.g. no time, no day and/or no month).  But if PSE10 has an incomplete date, it won't write it to the file (very annoying for working with historical photos).  But LR does not show these partial dates, it only shows complete dates.  In this case, I'm not sure if LR is showing the metadata in the photo files, or if it dropped the PSE10 catalog date info during the "Upgrade".
    So my main question is, am I missing something with regards to the import, or am I correct in my impression that this "Upgrade PSE Catalog" isn't doing much of anything?  (ExifTool shows all the metadata in the photo files -- much more complete than LR for that matter -- so what am I gaining by importing the PSE catalog into LR?)
    A couple other miscellaneous questions if I may:
    -- In PSE10, there is a notes field in the metadata, which goes into XMP-album in the photo files.  Does LR not support this?  As far as I can tell, LR supports only a few XMP namespaces, with no support for others.
    -- In LR in the Folder pane, I have no scroll bar, so I can only see the first few entries.  This seems to be a serious bug to me.  Or am I missing something?
    -- It appears in LR that one cannot show your entire catalog in the main pane sorted by folder then by file name.  Is this correct?  (This also seems to be a major drawback to me!)
    -- When I try to change the date of a photo (Edit Capture Time in LR), it forces you to enter a complete date/time, unlike PSE10.  Overall, it is much easier to manage these dates in PSE, it seems.  Does LR handle incomplete dates to any level?
    -- When I did my PSE import/"Upgrade", it assigned the wrong file names to some of my photos (it used file names from other photos in the catalog).  Is this a known issue?
    I imagine I have a very unusual vantage point, but at this point it seems that PSE is far superior to LR!  And I'm not happy with PSE10!!  Anyways, thanks in advance for your input.  Sorry about the long post - wasn't sure if I should divide it up or not.
    Bill

    This is a reply to the post from John R. Ellis...
    You mentioned "Adobe has never bothered to fix...".  Note that the last post by Michel B in this thread
    http://www.elementsvillage.com/forums/showthread.php?t=78779&page=2
    indicates that PSE11 can handle this.
    You say "After importing.... I used ExifTool to append..."  Did you do this file by file, or with a script?  A Perl script?  (The reason I ask is that I would have to learn Perl, so I would have to decide how much this means to me...)
    Regarding your suggestion to use fake dates
    "(e.g. "1970" => "1/1/1970 00:00:00")"
    I've been aware of this since I read it in your psedbtool material a year or two ago, but I've resisted making such changes.  When dealing with historical photos, you want the ambiguity.  In fact, you'd like to use things like "About 1970" and such.  I wish the photo metadata community would get on board with that.
Regarding your point about catalog conversion, for me it sounds like it would be just as good to import the photo files directly. (Better for me, I think, since converting the catalog seems to lose the folder hierarchy in the LR Folder view for some reason.) However, if I'm going to import the photo files directly, then LR has no advantage for me over any other program that I can see. The ExifTool GUI utility does a much better job with the metadata in general, the key drawback for me being that you have to type out the keywords each time you want to add one to a photo. Looking at the ACDSee Pro trial, that appears to handle metadata better than any of the Adobe products, the critical drawback being that it uses proprietary XMP namespaces, meaning interoperability issues. LR has drawbacks for me, such as not being able to sort by folder then filename in the main view pane, the lack of support for other XMP namespaces, and the Folder pane scrollbar thing. So at this point I'm in a state of depression. As far as I can tell, I only have a partial solution to my problems: upgrade from PSE10 to PSE11. Truly a depressing notion!!
