Tables in mapping losing location after import

Hi, I have 2 OWB repositories (let's say Dev and Prod). Both contain the same objects but reside on different databases.
If I make a change to a mapping in Dev, export the mapping, and import it into Prod, the table references within the mapping lose their locations, and I need to edit the mapping and point them to the correct locations.
Is there any way around this, or an explanation of why it is happening?
Thanks
Paul

Hi Paul,
Try importing the mappings using "match by name" rather than "match by universal identifier".
The default import mode is "match by universal identifier". If your location on Prod has an identifier different from the one on Dev, it will not be found during import.
If you import using "match by name", OWB will instead look for a location on Prod with the same name as the location on Dev.
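The same match-by-name choice can also be made when the import is scripted through OMB*Plus rather than the wizard. A minimal sketch (the connection details are placeholders, and the exact OMBIMPORT clause spellings can vary between OWB releases, so check the OMB*Plus reference for your version):
$ OMBPlus.sh import_by_name.tcl
where import_by_name.tcl contains something like:
# connect to the Prod design repository (credentials and host are hypothetical)
OMBCONNECT rep_owner/password@prodhost:1521:prodsid
# import the exported mapping, matching existing objects by name instead of by universal identifier
OMBIMPORT MDL_FILE 'my_mapping.mdl' USE UPDATE_MODE MATCH_BY_NAMES OUTPUT LOG 'import.log'
OMBDISCONNECT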
Regards,
Carsten.

Similar Messages

  • Template location after import

    Hi,
I tried importing a template in OVM 3.0.2.
It downloaded the template into the following location:
/OVS/Repositories/0004fb0000030000a2b97d3228ec6b0e/Templates/0004fb0000140000a2c11a7841dc15e6/tmp
After the import completed, the .tgz template is no longer in that tmp folder, and it is not shown in the GUI under Server Pools -> Templates either.
Can someone please help me?
    Thanks

Yes, the Jobs tab shows the import operation failed:
(02/28/2012 01:36:37:868 PM) Async operation failed on server: xyz. Object: cfgFile_0004fb0000140000a2c11a7841dc15e6, PID: 26462, Server error message: Template import error: No VM configure file found.
I am trying to import "Oracle VM 3 Templates (OVF) for Oracle Linux 6 Media Pack v2 for x86_64 (64 bit)" to create a guest domain.
This package/template contains the following files, as listed in the template readme file:
    OVM_OL6U1_X86_PVHVM.ova
    |- OVM_OL6U1_x86_PVHVM.mf
    |- OVM_OL6U1_x86_PVHVM.ovf
    |- System.img
so it clearly does not contain a vm.cfg.
Am I importing the wrong template? I just want to create a domU and install Oracle Linux 6.
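For what it's worth, an .ova package is a plain tar archive, so the listing above can be double-checked from the shell without extracting anything (a sketch, assuming a standard tar on the path):
$ tar tvf OVM_OL6U1_X86_PVHVM.ova
which should list the .mf, .ovf, and System.img members shown in the readme.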
    Thanks

  • Is there a way to add GPS location after importing to Photos?

I would like to add GPS location data to photos I take with my DSLR. In the past I was able to copy location data from a photo taken on my iPhone and apply it to all my DSLR photos.
    Can this be done in Photos?

    I actually bought Greenfinder and vCaddy (Golf GPS apps both in app store) and did extensive testing yesterday. You can see my full review here.
    http://forums.macrumors.com/showpost.php?p=6056655&postcount=20
    Bottom line - Could be a great alternative to a SkyCaddy or similar device

  • Vld-1134 on pluggable mapping after importing MDL

After importing an MDL file and setting the Oracle modules to the correct locations, I can deploy just about anything in my target module, except mappings that include a pluggable mapping referencing a view: validation raises VLD-1134, even though the view is correctly deployed in the target module. Referencing that view directly from any mapping doesn't raise any problem.
Are there additional configuration steps required to successfully validate those mappings?
    Thanks in advance,
    Pedro Almeida

    Can you try synchronizing (inbound) the view to the pluggable mapping? Then try validating.
    Cheers
    David

  • Losing Lookup Condition after import

For transferring a project from development to test (and later production) we're using metadata export/import.
After import into the test repository we noticed that in one mapping the Lookup Condition of several Key Lookups is missing; in fact, these Key Lookups all use the same table definition.
    Has anyone experienced this problem?
    And, of course, is there a cure?
    We're using
    Warehouse Builder 9.0.3.37.0
    Repository 9.0.3.0.1
    Oracle db. 9.2.0.2.1
    Regards,
    Peter van Leeuwen

    We resolved this issue.
The problem occurs when table names are long. The original table name was 27 characters long. In one mapping we have 3 lookups all based on the same table.
We lost the lookup condition in 2 cases: project export/import (from Development to Test) and copy mapping.
We solved the problem by shortening the table name.
    Peter

  • I'm getting an error message after importing a table through Dreamweaver into a page developed in Muse.

I'm getting a couple of messages after importing a table through Dreamweaver into a page developed in Muse. I have compared the code of the page with the table against the page before I inserted the table and cannot find anything missing. The web address is http://gourmetdreams.com/weekly-menu.html
    How can I clear this up?
    I also noticed that when I uploaded the file there was one error: "- error occurred - Access denied.  The file may not exist locally,  may be open in another program, or there could be a local permission problem."

Thanks for your help! I imported the table into Muse, then exported the site as HTML and opened it in Dreamweaver. I should've tried that in the first place but gave up when I couldn't edit it in Muse; I didn't know you could edit the HTML right in Muse. Also, my styling and the images I was using weren't showing up in Muse, but once I opened it in Dreamweaver, everything fell into place. AND I could edit it! My client needs to edit this page every week, so I needed to be sure she could do that. She doesn't use Muse but does use Dreamweaver. At this point, I think I'll leave it alone and not try to style it with inline CSS. Maybe when I get a little more time, I'll try it. I just don't want to mess things up now that they're working.

  • How do I import images from my hard drive without losing resolution? My original files after import are significantly smaller. What should I do?


Hi Keith, and all others chiming in. I do have the correct option checked in Advanced settings, telling iPhoto to copy the images into the library. What is a referenced library? Perhaps this is where I am getting confused. I exported my entire photo library from an old iMac5 to my external hard drive, and from there I attempted to import the entire library to my new iMac. Am I overlooking an obvious and easy way to import from the hard drive to the new iMac? I dragged the entire photo folder from the hard drive to the open window of iPhoto on the new computer. Now I only get preview file sizes in iPhoto, unless I have my external drive open. Perhaps I need to import the original images from the hard drive in a different way...?! (This is making me feel pretty stupid.)

  • OBIEE changes table column attributes after importing to rpd

    Hello guys
Something interesting is happening in our OBIEE environment. There are a couple of tables we import into the physical layer; after importing them, every column's data length gets set to '0' and every nullable flag becomes 'false'. I have checked these tables in the DB and they are all correct. I have also taken a copy of this RPD and tested it in my own local environment with the same connection pool settings, and there I was able to import the same tables from the same DB with the attributes remaining correct.
However, when we import these tables again into our RPD in the Unix environment, it again overrides all the column data lengths to 0 and nullable to false.
    Any clues on how to investigate?
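One way to start investigating is to confirm what the Oracle data dictionary itself reports for one of the affected tables, since that is the metadata the import should be picking up. A sketch (the connect string and table name are placeholders):
$ sqlplus owner/password@sourcedb
SQL> select column_name, data_length, nullable from all_tab_columns where table_name = 'MY_TABLE' order by column_id;
If the dictionary shows the correct lengths and nullability here but the RPD still ends up with 0/false, the problem is more likely in the import path (client, driver, or connection pool) on the Unix side than in the DB itself.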

Was it like this back when it was still Siebel Analytics?
I see that it is a known issue. Any luck recalling the solutions so far?

  • Problem in mapping xml data with imported RFC parameters

I am currently working on a scenario in which a flat file is generated by an RFID server and placed on an FTP server.
The flat file is picked up from the FTP server using XI and the contents are mapped to the corresponding imported RFC parameters.
The content of the file, which is in text format, is successfully converted to XML on the XI side.
The file contains records with 2 fields, Functional location and RFID equipment number. On the R/3 side these fields are used as the Functional location and Equipment number of the PM module.
The structure of the FTP message is as follows:
    <ns:RFID_MSG_TYPE xmlns:ns="urn://sisl:rfiddemo">
    <RecordSet>
      <Row>
       <FL1>"f1</FL1>
       <FL2>01</FL2>
       <FL3>01</FL3>
       <RFID_NUM>I001"</RFID_NUM>
      </Row>
    </RecordSet>
    </ns:RFID_MSG_TYPE>
After the mapping program, which maps the above structure to the imported RFC, is executed, the following payload document is generated:
    <ns:ZRFID_EQUIP xmlns:ns="urn:sap-com:document:sap:rfc:functions">
    <RECORDS>
      <item>
       <FLOC>f1-01-01</FLOC>
       <RFID_NO>I001</RFID_NO>
      </item>
    </RECORDS>
    </ns:ZRFID_EQUIP>
FLOC is of type string with size 30, and RFID_NO is also a string, with size 18.
When the data is brought into R/3, both fields FLOC and RFID_NO end up mapped into FLOC, which is of type char30.

    Hi Naveen,
In sxmb_moni the content transmitted to the adapter (RFC) is as follows:
<?xml version="1.0" encoding="UTF-8" ?>
<ns:ZRFID_EQUIP xmlns:ns="urn:sap-com:document:sap:rfc:functions">
  <RECORDS>
    <item>
      <FLOC>f1-01-01</FLOC>
      <RFID_NO>I006</RFID_NO>
    </item>
    <item>
      <FLOC>f1-01-02</FLOC>
      <RFID_NO>I002</RFID_NO>
    </item>
    <item>
      <FLOC>f1-01-03</FLOC>
      <RFID_NO>I003</RFID_NO>
    </item>
    <item>
      <FLOC>f1-01-04</FLOC>
      <RFID_NO>I004</RFID_NO>
    </item>
    <item>
      <FLOC>f1-01-05</FLOC>
      <RFID_NO>I005</RFID_NO>
    </item>
    <item>
      <FLOC>f1-01-06</FLOC>
      <RFID_NO>I001</RFID_NO>
    </item>
  </RECORDS>
</ns:ZRFID_EQUIP>
On the R/3 side the fields FLOC and RFID_NO both end up mapped into FLOC, which is char30,
e.g. floc=f1-01-01I006
     rfid_no=

  • Appset not appearing after import

    Hi Friends,
    We are on bpc75nw sp04.
A custom appset (A) was correctly imported into the target system. I'm able to see it in the backend (BW), but it does not appear in the front end (BPC Admin / BPC for Excel). Could anyone suggest what the reason might be? And how do I find out the install user ID of the BPC system?
Our Basis consultant tried different user IDs, including the install user, but he couldn't see it at the server level or the client level either.
I checked the UJE_USER table; it shows appset (A) available to USER1.
I expect that only with the USER1 ID are we able to see appset (A).
    Regards,
    Naresh

    Log:  import ended with warning.
       Start of the after-import method RS_APPS_AFTER_IMPORT for object type(s) APPS (Activation Mode)
       Start After Import for AppSet XXXX in Client 500 for RFC MDX PARSER
       Import Step UPDPTAB completed without errors
       Import Step ADMIN_DEF_UPD completed without errors
       Import Step APPS_ADD completed without errors
       After Import method for AppSet XXXX finished successfully
       Start of data checker messages
       The file service structure is correct.
    Dimension ACCOUNT's master is empty!
    Dimension ACCT's master is empty!
    Dimension CATEGORY's master is empty!
    Dimension CHANNEL's master is empty!
    Dimension COST_COMP's master is empty!
    Dimension C_ACCT's master is empty!
    Dimension C_CATEGORY's master is empty!
    Dimension TIME's master is empty!
       BPF: Validation error; No template access is defined for template "Sales Flow"
       BPF: Validation error; member item "REGION 2" is not in drive dimension "LOCATION"
       BPF: Validation error; hierarchy "PARENTH1" is not in drive dimension "LOCATION"
       BPF: Validation error; member item "REGION 1" is not in drive dimension "LOCATION"
       BPF: Validation error; hierarchy "PARENTH1" is not in drive dimension "LOCATION"
       BPF: Validation error; member item "REGION 2" is not in drive dimension "LOCATION"
       BPF: Validation error; hierarchy "PARENTH1" is not in drive dimension "LOCATION"
       BPF: Validation error; member item "REGION 1" is not in drive dimension "LOCATION"
       BPF: Validation error; hierarchy "PARENTH1" is not in drive dimension "LOCATION"
       BPF: Validation error; member item "INDIA" is not in drive dimension "LOCATION"
       BPF: Validation error; No template access is defined for template "SALES_PLANNING FLOW"
    End of data checker messages
       End of after import methode RS_APPS_AFTER_IMPORT (Activation Mode) - runtime: 00:19:35
       Start of the after-import method RS_APPS_AFTER_IMPORT for object type(s) APPS (Delete Mode)
       Nothing to delete.
       End of after import methode RS_APPS_AFTER_IMPORT (Delete Mode) - runtime: 00:00:00
       Post-import method RS_AFTER_IMPORT completed for APPS L, date and time: 20110207110512
       Post-import methods of change/transport request BQ1K900069 completed
            Start of subsequent processing ... 20110207104537
            End of subsequent processing... 20110207110512
       Execute reports for change/transport request: BQ1K900069
          on the application server: sparbdb
        Ended with return code:  ===> 4 <===

Changing default route after importing route-target

    Hi there,
Before I import the route-target, the default route is set to 192.168.0.22. After importing it into the VRF, it suddenly changes to another PE, 192.168.0.19. How do I force the default route to use 192.168.0.22?
    before adding route-target import 4000:1
    PE#sh ip route vrf customer 0.0.0.0
    Routing entry for 0.0.0.0/0, supernet
    Known via "bgp 100", distance 200, metric 0, candidate default path,
    type internal
    Last update from 192.168.0.22 00:14:08 ago
    Routing Descriptor Blocks:
    * 192.168.0.22 (Default-IP-Routing-Table), from 192.168.0.3, 00:14:08 ago
    Route metric is 0, traffic share count is 1
    AS Hops 0
    PE#sh ip bgp vpnv4 vrf customer 0.0.0.0
    BGP routing table entry for 100:239:0.0.0.0/0, version 335256
    Paths: (2 available, best #2, table customer)
    Not advertised to any peer
    Local
    192.168.0.22 (metric 4) from 192.168.0.45 (192.168.0.45)
    Origin incomplete, metric 0, localpref 100, valid, internal
    Extended Community: RT:100:120
    Originator: 192.168.0.50, Cluster list: 192.168.0.45
    Local
    192.168.0.22 (metric 4) from 192.168.0.3 (192.168.0.3)
    Origin incomplete, metric 0, localpref 100, valid, internal, best
    Extended Community: RT:100:120
    Originator: 192.168.0.50, Cluster list: 192.168.0.3
    after adding route-target import 4000:1
    PE#sh ip route vrf customer 0.0.0.0
    Routing entry for 0.0.0.0/0, supernet
    Known via "bgp 100", distance 200, metric 0, candidate default path,
    type internal
    Last update from 192.168.0.19 00:00:09 ago
    Routing Descriptor Blocks:
    * 192.168.0.19 (Default-IP-Routing-Table), from 192.168.0.3, 00:00:09 ago
    Route metric is 0, traffic share count is 1
    AS Hops 0
    PE#sh ip bgp vpnv4 vrf customer 0.0.0.0
    BGP routing table entry for 100:239:0.0.0.0/0, version 335386
    Paths: (3 available, best #1, table customer)
    Flag: 0x1820
    Not advertised to any peer
    Local, imported path from 4000:1:0.0.0.0/0
    192.168.0.19 (metric 2) from 192.168.0.3 (192.168.0.3)
    Origin incomplete, metric 0, localpref 100, valid, internal, best
    Extended Community: RT:4000:1
    Originator: 192.168.0.19, Cluster list: 192.168.0.3
    Local
    192.168.0.22 (metric 4) from 192.168.0.45 (192.168.0.45)
    Origin incomplete, metric 0, localpref 100, valid, internal
    Extended Community: RT:100:120
    Originator: 192.168.0.50, Cluster list: 192.168.0.45
    Local
    192.168.0.22 (metric 4) from 192.168.0.3 (192.168.0.3)
    Origin incomplete, metric 0, localpref 100, valid, internal
    Extended Community: RT:100:120
    Originator: 192.168.0.50, Cluster list: 192.168.0.3
    thanks in advance.
    maher

    Maher,
    Here's an example:
router bgp xx
 address-family vpnv4
  neighbor x.x.x.x route-map localpref in
!
ip extcommunity-list 1 permit rt 4000:1
!
route-map localpref permit 10
 match extcommunity 1
 set local-preference 110
!
route-map localpref permit 20
BTW: if the route with RT 4000:1 had a different RD, both routes would get imported into the VRF and you could set the local-pref using an import map instead of an inbound route-map on the VPNv4 session.
    Hope this helps,

  • Deployment hangs after importing metadata OWB 10gR2

    Hi folks,
in development I created a new mapping which I exported/imported into the test environment.
I did this using "import all objects, update metadata, match by UID".
When I try to deploy the imported mapping, the Control Center hangs, showing the green arrow.
    In the DOS box where I started the client it says:
    </OBJECT>
    </OBJECT>
    </OBJECT>
    </NAMESPACE>
RUNTIME_SERVICES_DEBUG [WhValidationGenerationTransaction] DeploymentUtils.isAdapterFCO(WBGeneratedObject): Not found objectTypeName CMPMap defaulting to FCO
RUNTIME_SERVICES_DEBUG [WhValidationGenerationTransaction] DeploymentSpecificationImpl.addToUnitSpecificationImplArrayList: created UnitSpecification for DeploymentAdapterName DDLDeployment StoreUOID= 9B7415EC7AB94F54824DF5F57D502828 Action = 1 FCO list= M_FUNDSTRUCT_DWH,
Runtime Parameters for : M_FUNDSTRUCT_DWH
SYSTEM: DEFAULT_OPERATING_MODE: SET_BASED_FAIL_OVER_TO_ROW_BASED
SYSTEM: DEFAULT_AUDIT_LEVEL: ERROR_DETAILS
SYSTEM: MAXIMUM_NUMBER_OF_ERRORS: 50
SYSTEM: COMMIT_FREQUENCY: 1000
SYSTEM: BULK_SIZE: 1000
SYSTEM: DEFAULT_PURGE_GROUP: WB
SYSTEM: ANALYZE_TABLE_SAMPLE_PERCENTAGE: 5
RUNTIME_SERVICES_DEBUG [WhValidationGenerationTransaction] ExecutionUtils.decodeSystemParameter: Unexpected parameter ANALYZE_TABLE_SAMPLE_PERCENTAGE
Notification (Unit): start VEVA_LOCATION (Erstellen) - 0 (Adapter=DDLDeployment Location=VEVA_LOCATION)
MEM:GenerationValidationService:internalCompile:after transaction:U: 65237816%INCR:2538296:INCREASE
    What is the problem?
    Michael

The error is due to a corrupted MDL file. The MDL gets corrupted because there is not enough memory for the Java VM. To fix the MDL corruption, increase the memory parameters.
1. On the client PC, in XXXX\owb\bin\win32\owbclient.bat change
    java -Xms64M -Xmx768M -Dlimit=768M -XX:MaxPermSize=256M
    to
    java -Xms64M -Xmx1024M -Dlimit=1024M -XX:MaxPermSize=512M
Restart the client session.
That fixed the problem in my case.
Oracle also recommends changing the memory parameters on the server side (Unix in my case).
2. On the server side, in XXXX/owb/bin/unix/run_service.sh change
    $JAVAPATH/bin/java -Xmx768M
    to
    $JAVAPATH/bin/java -Xmx1024M
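A quick way to confirm which heap flags actually ended up in both scripts after editing (a sketch; XXXX stands for the install path, as in the steps above):
$ grep -i Xmx XXXX/owb/bin/win32/owbclient.bat
$ grep -i Xmx XXXX/owb/bin/unix/run_service.sh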

  • Unlogged Missing Photos After Import From Aperture

    Hi!
I have just made the switch from Aperture to Lightroom, and have used version 1.1 of the Aperture import plugin.
According to Library -> Photos, my Aperture library contains 11105 photos; however, after importing into Lightroom, I have only 10967 photos. I have checked the import log, and there were 4 items which failed to import - 3 were .mpo files (panoramas from an Xperia) and 1 was a .gif file. This leaves a deficit of 133 photos that I can't account for.
    Is there any way to compare the aperture library to the lightroom library to see what is missing?

*WARNING* Once again, this is a VERY long post! And this one contains not only SQL, but heaps of command line fun!
TL;DR summary: Aperture is storing duplicates on disk (and referencing them in the DB) but hiding them in the GUI. Exactly how it does this, I'm not sure yet. And how to clean it up, I'm not sure either. But if you would like to know how I proved it, read on!
    An update on handling metadata exported from Aperture. Once you have a file, if you try to view it in the terminal, perhaps like this:
    $ less ApertureMetadataExtendedExport.txt
    "ApertureMetadataExtendedExport.txt" may be a binary file.  See it anyway?
you will get that error. It turns out I was wrong: it's not (only?) due to the size of the file / line length; it's actually the file type Aperture creates:
    $ file ApertureMetadataExtendedExport.txt
    ApertureMetadataExtendedExport.txt: Little-endian UTF-16 Unicode text, with very long lines
The key bit is "Little-endian UTF-16"; that is what is causing the shell to think it's binary. The little-endian part is not surprising - after all, it's an x86_64 platform. The UTF-16, though, cannot be handled by the shell, so the file has to be converted. There are command line utils, but TextWrangler does the job nicely.
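If you would rather stay in the terminal, iconv can do the same conversion (a sketch; here it writes to a new file, while the commands below assume the conversion was done in place):
$ iconv -f UTF-16LE -t UTF-8 ApertureMetadataExtendedExport.txt > ApertureMetadataExtendedExport-utf8.txt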
    After conversion (to Unicode UTF-8):
    $ file ApertureMetadataExtendedExport.txt
    ApertureMetadataExtendedExport.txt: ASCII text, with very long lines
    and
    $ less ApertureMetadataExtendedExport.txt
    Version Name    Title   Urgency Categories      Suppl. Categories       Keywords        Instructions    Date Created    Contact Creator Contact Job Title       City    State/Province  Country Job Identifier  Headline        Provider        Source  Copyright Notice        Caption Caption Writer  Rating  IPTC Subject Code       Usage Terms     Intellectual Genre      IPTC Scene      Location        ISO Country Code        Contact Address Contact City    Contact State/Providence        Contact Postal Code     Contact Country Contact Phone   Contact Email   Contact Website Label   Latitude        Longitude       Altitude        AltitudeRef
    So, there you have it! That's what you have access to when exporting the metadata. Helpful? Well, at first glance I didn't think so - as the "Version Name" field is just "IMG_2104", no extension, no path etc. So if we have multiple images called "IMG_2104" we can't tell them apart (unless you have a few other fields to look at - and even then just comparing to the File System entries wouldn't be possible). But! In my last post, I mentioned that the Aperture SQLite DB (Library.apdb, the RKMasters table in particular) contained 11130 entries, and if you looked at the Schema, you would have noticed that there was a column called "originalVersionName" which should match! So, in theory, I can now create a small script to compare metadata with database and find my missing 25 files!
First of all, I need to add that, when exporting metadata in Aperture, you need to select all the photos! ... and it will take some time! In my case TextWrangler managed to handle the 11108-line file without any problems. And even better, after converting, I was able to view the file with less. This is a BIG step up from my last attempt.
    At this point it is worth pointing out that the file is tab-delimited (csv would be easier, of course) but we should be able to work with it anyway.
    To extract the version name (first column) we can use awk:
    $ cat ApertureMetadataExtendedExport.txt | awk -F'\t' '{print $1}' > ApertureMetadataVersionNames.txt
    and we can compare the line counts of both input and output to ensure we got everything:
    $ wc -l ApertureMetadataExtendedExport.txt
       11106 ApertureMetadataExtendedExport.txt
    $ wc -l ApertureMetadataVersionNames.txt
       11106 ApertureMetadataVersionNames.txt
So far, so good! You might have noticed that the line count is 11106, not 11105: the input file still has the header I printed earlier. So we need to remove the first line. I just use vi for that.
    Lastly, the file needs to be sorted, so we can ensure we are looking in the same order when comparing the metadata version names with the DB version names.
    $ cat ApertureMetadataVersionNames.txt | sort > ApertureMetadataVersionNamesSorted.txt
    To get the Version Names from the DB, fire up sqlite3:
    $ sqlite3 Library.apdb
    sqlite> .output ApertureDBMasterVersionNames.txt
    sqlite> select originalVersionName from RKMaster;
    sqlite> .exit
    Checking the line count in the DB Output:
    $ wc -l ApertureDBMasterVersionNames.txt
       11130 ApertureDBMasterVersionNames.txt
    Brilliant! 11130 lines as expected. Then sort as we did before:
    $ cat ApertureDBMasterVersionNames.txt | sort > ApertureDBMasterVersionNamesSorted.txt
    So, now, in theory, running a diff on both files, should reveal the 25 missing files.... I must admit, I'm rather excited at this point!
    $ diff ApertureDBMasterVersionNamesSorted.txt ApertureMetadataVersionNamesSorted.txt
    IT WORKED! The output is a list of changes you need to make to the second input file to make it look the same as the first. Essentially, this will (in my case) show the Version Names that are missing in Aperture that are present on the File System.
    So, a line like this:
    1280,1281d1279
    < IMG_0144
    < IMG_0144
basically just means that IMG_0144 appears twice more in the DB than in the metadata. Note: this is specific to the way I ordered the input files to diff; although you will get the same basic output if you reverse the input files, the interpretation is obviously reversed, as shown here (note that in the first output we have 'd' for deleted, and in the second output it's 'a' for added):
    1279a1280,1281
    > IMG_0144
    > IMG_0144
In any case, looking through my output and counting, I indeed have 25 images to investigate. The problem here is we just have a version name; fortunately in my output most are unique, with just a couple of duplicates. This leads me to believe that my "missing" files are actually Aperture handling duplicates (though why it's hiding them, I'm not sure). In my DB dump I could look at the path etc. as well, and that might help, but as it's just 25 cases, I will instead get an FS dump and grep for the version name. This will give me all the files on the FS that match, and I can then look at each and see what's happening.
    Dumping a list of master files from the FS: (execute from within the Masters directory of your Aperture library)
    $ find . -type f > ApertureFSMasters.txt
    This will be a list including path (relative to Master) which is exactly what we want. Then grep for each version name. For example:
    $ grep IMG_0144 ApertureFSMasters.txt
    ./2014/04/11/20140411-222634/IMG_0144.JPG
    ./2014/04/23/20140423-070845/IMG_0144 (1).jpg
    ./2014/04/23/20140423-070845/IMG_0144.jpg
    ./2014/06/28/20140628-215220/IMG_0144.JPG
Here is a solid bit of information! On the FS I have 4 files called IMG_0144, yet if I look in the GUI (or metadata dump) I only have 2.
    $ grep IMG_0144 ApertureMetadataVersionNamesSorted.txt
    IMG_0144
    IMG_0144
So, there are the two files already!
    The path preceding the image in the FS dump, is the date of import. So I can see that two were imported at the same time, and two separately. The two that show up in the GUI have import sessions of 2014-06-28 @ 09:52:20 PM and 2014-04-11 @ 10:26:34 PM. That means that the first and last are the two files that show in the GUI, the middle two do not.... Why are they not in the GUI (yet are in the DB) and why do they have the exact same import date/time? I have no answer to that yet!
I used open <filename> from the terminal prompt to view each file, and 3 out of my 4 are identical, with the fourth different.
    So, lastly, with a little command line fu, we can make a useful script to tell us what we want to know:
#!/bin/bash
# $1 is the version name to look for, e.g. IMG_0144.
# ApertureFSMasters.txt holds paths relative to the Masters directory, like ./2014/04/11/...
grep "$1" ApertureFSMasters.txt | sed 's|^\.|Masters|' | awk '{print "<full path to Aperture Library folder>"$0}' | \
while read -r line; do
  # identical files produce identical SHA-1 checksums
  openssl sha1 "$line"
done
Replace <full path to Aperture Library folder> with the full path to your Aperture Library folder, ending in a slash - perhaps /volumes/some_disk_name/some_username/Pictures/... etc. Then chmod 755 the script, and execute ./<scriptname> <version name>, so something like
    $ ./calculateSHA.sh IMG_0144
What we're doing here is taking in the version name we want to find (for example IMG_0144) and looking for it in the FS dump list. Remember that file contains image paths relative to the Aperture Library Masters directory, which look something like "./YYYY/MM/DD/YYYYMMDD-HHMMSS/<FILENAME>" - we use sed to replace the leading "." with "Masters". Then we pipe that to awk and insert the full path to the Aperture library before the file name; the end result is a line which contains the absolute path to an image. There are several other ways to solve this, such as generating the FS dump from the root dir. You could also combine the awk into the sed (or the sed into the awk).. but this works. Each line is then passed, one at a time, to the openssl program to calculate the SHA-1 checksum for that image. If two SHA-1s match, then those files are identical (yes, there is a small chance of a collision in SHA-1, but it's unlikely!).
So, at the end of all this, you can see exactly what's going on. And in my case, Aperture is storing duplicates on disk and not showing them in the GUI. To be honest, I don't actually know how to clean this up now! So if anyone has any ideas, please let me know. I can't just delete the files on disk, as they are referenced in the DB. I guess it doesn't make too much difference, but my personality requires me to clean this up (at the very least to provide closure on this thread).
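As a starting point for any cleanup, the duplicated version names can be listed straight from the DB, using the same RKMaster table as before (a sketch):
$ sqlite3 Library.apdb
sqlite> select originalVersionName, count(*) from RKMaster group by originalVersionName having count(*) > 1;
sqlite> .exit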
The final point to make here is that Lightroom also has 11126 images (11130 less the 4 non-compatible files). So it has taken all the duplicates in the import.
Well, that was a fun journey, and I learned a lot about Aperture in the process. And yes, I know this is a Lightroom forum and maybe this info would be better on the Aperture forum - I will probably post it there too. But there is some tie-back to the Lightroom importer, to let people know what's happening internally. (I guess I should update my earlier post, where I assumed the Lightroom Aperture import plugin was using the FS only; it *could* be using the DB as well (and probably is, so it can get more metadata).)
UPDATE: I jumped the gun a bit here, and based my conclusion on limited data. I have now finished calculating the SHA-1 for all my missing versions, as well as comparing the counts in the GUI to the counts in the FS. For the most part, where the GUI count is lower than the FS count, there is a clear duplicate (two files with the same SHA-1). However, I have a few cases where the FS count is higher and all the images on disk have different SHA-1's! Picking one at random from my list: I have 3 images in the GUI called IMG_0843. On disk I have 4 files, all with different SHA-1's. Viewing the actual images, 2 look the same and the other 2 are different. So that matches 3 "unique" images.
    Using Preview to inspect the exif data for the images which look the same:
    image 1:
    Pixel X Dimension: 1 536
    Pixel Y Dimension: 2 048
    image 2:
    Pixel X Dimension: 3 264
    Pixel Y Dimension: 2 448
(image 2 also has an extra Regions dictionary in the exif)
So! These two images are not identical (we knew that from the SHA-1), but they are similar (the content is the same, though the resolution differs), yet Aperture seems to be treating them as duplicates.. that's not good! Does this mean that if I resize an image for the web, and keep both, Aperture won't show me both? (At least it keeps both on disk, I guess...)
The resolution of image 1 is suspiciously like the resolutions that were uploaded to (the original version of) iCloud Photos on the iPhone (one of the reasons I never used it). And indeed, the photo I chose at random here is one that I have in an iCloud-stored album (I have created a screensaver synced to iCloud, to use on my various Macs and Apple TVs). Examining the data for the cloud version of the image shows the resolution to be 1536x2048. The screensaver contains 22 images - I theorised earlier that these might be the missing images; perhaps I was right after all? Yet another avenue to explore.
Ok. I dumped the screensaver metadata, converted it to UTF-8, grabbed the version names, and sorted them (just like before). Then I compared them to the output of the diff command. Yep! The 22 screensaver images match 22 of the 25 missing images. The other 3 appear to be exact duplicates (same SHA-1) of images already in the library. That almost solves it! So, can I conclude that Lightroom has imported my iCloud screensaver as normal photos at lower res? In which case, it would likely do the same for any shared photo source in Aperture, and perhaps it would be wise to turn that feature off before importing to Lightroom?

  • Apex Application not working after importing to my apps schema...

    Hi friends,
I created a DB application in my sample schema, associated with APEX 4.0.
Application details:
*) login page (where I will be giving username and password)
*) page 1 (consists of several form fields, like
--->name:
--->module:
--->projects:
----->email:
If I put entries in the above fields, they are automatically inserted into a report region on the same page, which shows the above fields in a table.
This report has an edit icon in front of each row; if I click the edit icon of a row, it goes to another page,
*) i.e. page 3 (consists of the same fields, automatically populated with the entries of the corresponding row, so if I want to make any changes I can update them there).
This is the application I developed; it works well within the sample schema on APEX 4.0.
What I did then is create a new workspace with the APPS schema in it, and I imported the application I developed in the sample schema into the APPS schema.
After importing into the APPS schema, when I tried to open the application, it did not show any data. That is because the tables supporting the application are not in the APPS schema, so what I did is grant privileges on the respective tables and also create synonyms in the APPS schema for accessing them.
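For reference, the grant-plus-synonym step looks something like this (the schema, password, and table names are placeholders):
$ sqlplus sample_owner/password@db
SQL> grant select, insert, update, delete on my_table to apps;
SQL> exit
$ sqlplus apps/password@db
SQL> create synonym my_table for sample_owner.my_table;
SQL> exit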
Now, if I put entries into the form on page 2, they do get inserted into the report region on the same page.
But my problem starts here: if I click the edit icon in a row of the report, it goes to page 3, which has the corresponding form fields, but they are not populated automatically, and if I enter anything there, the report table is not updated.
Why does this problem occur for my application in the APPS schema, when it works very well within the sample schema? Why does the form no longer show any entries as soon as I click the edit icon of a row?
I can't tell what the real problem behind this is... help me, friends.
As this is an urgent requirement in my project, please reply ASAP.
    Thanks in Advance..
    Regards,
    Harry...

    First, try a system reset although I can't give you any confidence.  It cures many ills and it's quick, easy and harmless...
    Hold down the on/off switch and the Home button simultaneously until you see the Apple logo.  Ignore the "Slide to power off" text if it appears.  You will not lose any apps, data, music, movies, settings, etc.
    If the Reset doesn't work, try a Restore.  Note that it's nowhere near as quick as a Reset.  It could take well over an hour!  Connect via cable to the computer that you use for sync.  From iTunes, select the iPad/iPod and then select the Summary tab.  Follow directions for Restore and be sure to say "yes" to the backup.  You will be warned that all data (apps, music, movies, etc.) will be erased but, as the Restore finishes, you will be asked if you wish the contents of the backup to be copied to the iPad/iPod.  Again, say "yes."
    At the end of the basic Restore, you will be asked if you wish to sync the iPad/iPod.  As before, say "yes."  Note that that sync selection will disappear and the Restore will end if you do not respond within a reasonable time.  If that happens, only the apps that are part of the IOS will appear on your device.  Corrective action is simple -  choose manual "Sync" from the bottom right of iTunes.
    If you're unable to do the Restore, go into Recovery Mode per the instructions here.

  • Cannot edit chapter names in iDVD after importing movie from iMovie

I have no trouble customizing the iDVD (version 5.0.1) menu page after importing from iMovie directly (instead of going through QT as an interim step). But the names of the chapter markers are the file names of the video clips, title pages, and transitions. The iMovie help says to change the names in the space next to the 'thumbnails' of chapters found in iDVD. THERE ARE NO THUMBNAILS. And there is no way I could find, using menu options or clicking on the chapter frames shown in 'map' mode, to type in anything.
There are other problems, like getting rid of the default 'Travel Cards' stuff... I can drop a still photo in for a custom background, but the default 'celluloid design' banner still rolls up in preview mode. If you can also help with that, great. But the main thing right now is getting these chapter names changed.

    VJK,
    You are the third person who has taken a crack at this. I don't know if you read my last reply which detailed the situation. But if you did and the 'menu' you refer to is the main menu page, that is not the problem; I can change text there. I am talking about the submenu for 'Scene Selections', which I have only been able to access via map mode.
    In map mode I cannot highlight the names. Period. Can't do it no matter where I place the cursor or carefully click.
In preview mode, when the Scene Selection sub-menu page appears, whenever you drag the cursor over a chapter name it automatically highlights, so any clicking will either do nothing (if a really careful click, trying to highlight the text without triggering playback of the movie chapter) or just start playing the movie chapter.
    So...how does this newbie get to the 'menu' you mention where I can click on a chapter name in order to type new text?
You will be able to change the names in the menu, but depending upon your mouse, etc., it can be a little particular.
Place the cursor at the very beginning of the word you want to change. Click once - this should hi-lite the word. Click again - it should make the word go blue (or appear in a blue box); then you can delete and re-type.
Be sure to save the project.
I have found that even after saving and successfully burning the project, if I return to it later the changes seem to have gone away... so if you go back to it later, just double check that everything is kosher.
