Relations between schemas after import

Hello,
I have a schema that is related to other schemas. I use export to back up the database, because the data is not huge and I am not an expert DBA yet, and I have two questions:
1. If I want to apply a previous backup to my database, do I have to drop the schema, create another one with the same name, and then import the data into it, or is importing without dropping okay?
2. Would this disable any relations with the other schemas, or make any trigger or anything else invalid?
Thanks

Hi oasis!
To answer your first question:
If you use the IGNORE=y parameter with imp, then all rows will be imported whether or not the table object already exists.
(See: http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/exp_imp.htm#i1023533)
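For illustration, a re-import into an existing schema might look like this (a minimal sketch; the connect string, dump file and schema names are placeholders, not taken from the original post):
$ imp system/manager@ORCL file=mybackup.dmp fromuser=OASIS touser=OASIS ignore=y
Be aware that because nothing is dropped first, rows that are still present in the tables are imported on top of the existing data, which can produce duplicate rows (or unique-constraint errors that imp then reports).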
The answer to your second question is:
No, there won't be any such problems after re-importing a backup.
Hope this helps!
regards

Similar Messages

  • Apex Application not working after importing to my apps schema...

    Hi friends,
    I created a DB application in my SAMPLE schema, associated with APEX 4.0.
    Application details:
    *) Login page (where I give my username and password)
    *) Page 1 (consists of several form fields, like
    ---> name:
    ---> module:
    ---> projects:
    ---> email:
    If I put entries into the above fields, they are automatically inserted into a report region on the same page, which shows the same fields in tabular form.
    The report has an edit icon in front of each row; if I click the edit icon of a row, it goes to another page,
    *) i.e. page 3 (which consists of the same fields, automatically populated with the entries of the corresponding row, so that I can make and save any changes there).
    This is the application I developed, and it works well within the SAMPLE schema on APEX 4.0.
    What I did next was create a new workspace with the APPS schema in it, and import the application I had developed in the SAMPLE schema into the APPS schema.
    After importing into the APPS schema, when I tried to open the application it did not show any data. That was because the tables supporting the application are not in the APPS schema, so I granted privileges on the respective tables and also created synonyms in APPS for accessing the tables that support the application.
    Now if I put entries into the form on page 2, they get inserted into the report region, which is also on page 2.
    But my problem starts here: if I click the edit icon in a row of the report, it goes to page 3, which has the respective form fields, but they are not populated automatically, and if I enter anything there, the report table is not updated.
    Why does this happen to my application in the APPS schema, when it works very well within the SAMPLE schema? Why doesn't the form show the entries automatically as soon as I click the edit icon in a row?
    I can't tell what the real problem is. Help me, friends.
    This is an urgent requirement in my project, so please reply ASAP.
    Thanks in advance.
    Regards,
    Harry
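    For context, the grant-plus-synonym step described in the post usually looks something like this (a sketch with hypothetical object names, not the poster's actual tables):
    GRANT SELECT, INSERT, UPDATE, DELETE ON sample.emp_projects TO apps;
    CREATE SYNONYM apps.emp_projects FOR sample.emp_projects;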

    First, try a system reset although I can't give you any confidence.  It cures many ills and it's quick, easy and harmless...
    Hold down the on/off switch and the Home button simultaneously until you see the Apple logo.  Ignore the "Slide to power off" text if it appears.  You will not lose any apps, data, music, movies, settings, etc.
    If the Reset doesn't work, try a Restore.  Note that it's nowhere near as quick as a Reset.  It could take well over an hour!  Connect via cable to the computer that you use for sync.  From iTunes, select the iPad/iPod and then select the Summary tab.  Follow directions for Restore and be sure to say "yes" to the backup.  You will be warned that all data (apps, music, movies, etc.) will be erased but, as the Restore finishes, you will be asked if you wish the contents of the backup to be copied to the iPad/iPod.  Again, say "yes."
    At the end of the basic Restore, you will be asked if you wish to sync the iPad/iPod.  As before, say "yes."  Note that that sync selection will disappear and the Restore will end if you do not respond within a reasonable time.  If that happens, only the apps that are part of the IOS will appear on your device.  Corrective action is simple -  choose manual "Sync" from the bottom right of iTunes.
    If you're unable to do the Restore, go into Recovery Mode per the instructions here.

  • Invalid objects in APEX Schemas after import.

    Hi,
    After importing the APEX_040100 user in the database I got several invalid objects that are causing APEX not to work.
    I gave up trying to compile them, after using all kinds of tricks such as compiling one schema at a time, compiling all of them using the utlprp.sql script,
    or using EXEC DBMS_DDL.alter_compile('PACKAGE', 'MY_SCHEMA', 'MY_PACKAGE');
    or one object at a time using commands similar to:
    ALTER PACKAGE my_package COMPILE;
    ALTER PACKAGE my_package COMPILE BODY;
    ALTER PROCEDURE my_procedure COMPILE;
    ALTER FUNCTION my_function COMPILE;
    ALTER TRIGGER my_trigger COMPILE;
    ALTER VIEW my_view COMPILE;
    I can't find any documentation to show how to recompile them properly.
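    For reference, the standard bulk-recompile entry points look like this (a sketch assuming SYSDBA access in SQL*Plus; utlrp.sql is the documented wrapper that runs utlprp.sql):
    -- Recompile all invalid objects in the database:
    @?/rdbms/admin/utlrp.sql
    -- Or recompile only the invalid objects of a single schema:
    EXEC DBMS_UTILITY.compile_schema(schema => 'APEX_040100', compile_all => FALSE);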
    So I am asking:
    1. How can we recompile all objects in APEX?
    2. How can we re-install APEX?
    3. How can we revert to the previous version if both have problems?
    Thanks
    Yannis
    Here is the list of invalid objects:
    APEX_040100     PACKAGE     WWV_FLOW_DYNAMIC_EXEC     INVALID
    APEX_040100     PACKAGE     WWV_FLOW_LOAD_DATA     INVALID
    APEX_040100     PACKAGE     WWV_FLOW_SAMPLE_APP     INVALID
    APEX_040100     PACKAGE     WWV_FLOW_UTILITIES     INVALID
    APEX_040100     PACKAGE BODY     APEXWS     INVALID
    APEX_040100     PACKAGE BODY     HTMLDB_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_4000_UI     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_ADMIN_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_ADVISOR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_AJAX     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_APPLICATION_INSTALL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_ASFCOOKIE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_AUDIT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_AUTHENTICATION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_AUTHENTICATION_ENGINE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_AUTHORIZATION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_BUILDER     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_BUTTON     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CALENDAR3     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CALENDAR_AJAX     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CHECK     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_COLLECTION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CONDITIONS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_COPY_PAGE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CREATE_APP_FROM_QUERY     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CREATE_MODEL_APP     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CSS_API_PRIVATE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CUSTOM_AUTH     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_CUSTOM_AUTH_STD     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DATALOAD_XML     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DATA_QUICK_FLOW     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DATA_UPLOAD     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DICTIONARY     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DISP_PAGE_PLUGS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DML     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DOWNLOAD     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DRAG_LAYOUT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_DYNAMIC_EXEC     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_ERROR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_F4000_P4150     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_F4000_PLUGINS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_F4000_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FILE_MGR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FLASH_CHART     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FLASH_CHART2     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FLASH_CHART5     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FLASH_CHART5_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FLASH_CHART_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FND_DEVELOPER_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FND_USER_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FORMS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_FORM_CONTROL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_GENERATE_DDL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_GENERATE_TABLE_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_GENERIC_ATTR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_GEN_API2     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_HINT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_HTML_API_PRIVATE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_IMP_PARSER     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_INSTALL_WIZARD     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_INSTANCE_ADMIN     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_ITEM     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_JAVASCRIPT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_JOB     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_LANG     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_LDAP     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_LIST     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_LOAD_DATA     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_LOAD_EXCEL_DATA     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_LOGIN     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_MAIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_META_DATA     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_META_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_MODEL_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_NATIVE_AUTHENTICATION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_NATIVE_DYNAMIC_ACTION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_NATIVE_ITEM     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PAGE_CACHE_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PLSQL_EDITOR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PLSQL_JOB     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PLUGIN_ENGINE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PLUGIN_F4000     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PLUGIN_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PPR_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PRINT_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PROCESS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PROCESS_UTILITY     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PROVISION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_PROVISIONING     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_QUERY_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_REGEXP     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_REGION_LAYOUT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_RENDER_QUERY     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_RENDER_SHORTCUT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_REST     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SAMPLE_APP     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SC_TRANSACTIONS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SECURITY     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SERIES_ATTR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SESSION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SESSION_MON     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SVG     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SW_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SW_PAGE_CALLS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SW_PARSER     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SW_SCRIPT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_SW_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TABLE_DRILL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TAB_MGR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TEAM     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TEAM_GEN_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TEMPLATES_UTIL     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_THEME_FILES     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_THEME_MANAGER     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TREE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_TREE_REGION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_UPGRADE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_UPGRADE_APP     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_UTILITIES     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_VALIDATION     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WEBSERVICES_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WEB_SERVICES     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WIZARD_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WIZ_CONFIRM     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET_AJAX     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET_DIALOGUE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET_EXPR     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET_FORM     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSHEET_STANDARD     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WORKSPACE_REPORTS     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_ATTACHMENT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_DIALOG     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_EXPORT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_FLASH_CHART     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_FORM     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_GEOCODE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_IMPORT     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_IMPORT_API     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_SECURITY     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_SETUP     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_STICKIES     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_UI     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_WS_WEBPAGE     INVALID
    APEX_040100     PACKAGE BODY     WWV_FLOW_XLIFF     INVALID
    APEX_040100     PACKAGE BODY     WWV_META_CLEANUP     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_ACC_LOAD     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_FRMMENU_LOAD_XML     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_FRM_LOAD_XML     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_FRM_OLB_LOAD_XML     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_FRM_UPDATE_APX_APP     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_FRM_UTILITIES     INVALID
    APEX_040100     PACKAGE BODY     WWV_MIG_RPT_LOAD_XML     INVALID
    APEX_040100     PACKAGE BODY     WWV_RENDER_CALENDAR2     INVALID
    APEX_040100     PACKAGE BODY     WWV_RENDER_CHART2     INVALID
    APEX_040100     PACKAGE BODY     WWV_RENDER_REPORT3     INVALID
    APEX_040100     PROCEDURE     APEX_ADMIN     INVALID
    APEX_040100     PROCEDURE     F     INVALID
    APEX_040100     PROCEDURE     HTMLDB_ADMIN     INVALID
    APEX_040100     PROCEDURE     WS     INVALID
    APEX_040100     SYNONYM     APEX_COLLECTIONS     INVALID
    APEX_040100     SYNONYM     HTMLDB_COLLECTIONS     INVALID
    APEX_040100     TRIGGER     WWV_FLOW_FEEDBACK_T1     INVALID
    APEX_040100     VIEW     WWV_FLOW_ADVISOR_RESULT     INVALID
    APEX_040100     VIEW     WWV_FLOW_COLLECTIONS     INVALID
    APEX_040100     VIEW     WWV_FLOW_SEARCH_RESULT     INVALID
    APEX_040100     VIEW     WWV_MULTI_COMPONENT_EXPORT     INVALID

    yannisr wrote:
    Hi,
    After importing the APEX_040100 user in the database I got several invalid objects that are causing APEX not to work.

    Hi,
    You mean you exported the APEX_040100 schema from one database and imported it into another database?
    There are also public synonyms, and if I recall correctly some objects belonging to APEX are in the SYS schema and are needed.
    Regards,
    Jari
    http://dbswh.webhop.net/dbswh/f?p=BLOG:HOME:0
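    To check whether the import also left APEX-related objects invalid outside the APEX_040100 schema (the public synonyms and SYS-owned objects mentioned above), a data dictionary query along these lines helps (a sketch; run as a DBA):
    SELECT owner, object_type, object_name
      FROM dba_objects
     WHERE status = 'INVALID'
     ORDER BY owner, object_type, object_name;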

  • I'm using a Canon HV20 video camera that uses digital tape. After importing to my Events Library in Final Cut X, there are some timecode breaks between clips resulting also in gaps between the clips. Is there a way to avoid this?

    I'm using a Canon HV20 video camera that uses digital tape. After importing to my Events Library in Final Cut X, there are some timecode breaks between clips resulting also in gaps between the clips. Is there a way to avoid this?

    Thanks Russ.  I'm a beginner but Lynda.com has a video on making an archive, so I'll follow her video to try this method.

  • "Messages" problem after importing .xsd file as external definition

    Hello,
    I received an .xsd file from a customer and need to import it as an "External Definition" in order to create the "Message Interface". The structure of the xsd looks like this:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:px="http://www.opentrans.org/XMLSchema/1.0" targetNamespace="http://www.opentrans.org/XMLSchema/1.0" elementFormDefault="qualified">
    <xsd:element name="ORDER">
    </xsd:element>
    <xsd:element name="ADDRESS">
    </xsd:element>
    <xsd:element name="ARTICLE_ID">
    </xsd:element>
    </xsd:schema>
    After importing it and looking at the "Messages" tab, I get numerous entries - one message for each <xsd:element>! But I basically need only one "Message" that holds my complete xsd file.
    I tried inserting <xsd:element name="COMPLETEORDER"> right after the <xsd:schema> tag, but that didn't work either. Somehow I need to combine all the <xsd:element>s.
    Does anyone have an idea? Thank you very much!
    Peter

    Hello Prateek, Hello everyone,
    I now know what the problem is. I downloaded XMLspy and checked on the structure:
    <?xml version="1.0" encoding="ISO-8859-1"?>
    <xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:px="http://www.opentrans.org/XMLSchema/1.0" targetNamespace="http://www.opentrans.org/XMLSchema/1.0" elementFormDefault="qualified">
    <xsd:element name="ORDER">
    </xsd:element>
    <xsd:element name="ADDRESS">
    </xsd:element>
    <xsd:element name="ARTICLE_ID">
    </xsd:element>
    <xsd:simpleType name="dtBOOLEAN">
    </xsd:simpleType>
    <xsd:simpleType name="dtCOUNTRIES">
    </xsd:simpleType>
    <xsd:element name="SUPPLIER_AID">
    </xsd:element>
    <xsd:simpleType name="typeSUPPLIER_AID">
    </xsd:simpleType>
    </xsd:schema>
    Between the long list of <xsd:element> tags there are some simpleTypes on the same level. Now if I insert
    <xsd:element name="COMPLETEORDER">
    <xsd:complexType>
    <xsd:sequence>
    on top, the <xsd:sequence> would be on the same level as the simpleTypes - which is not valid!
    But can I just move all the simpleTypes, e.g. into an <xsd:element>?
    That would mean changing the customer's structure, which I think is not a good thing!?
    Thank you again for your help! I really appreciate it!
    Best regards,
    Peter

  • Tool for viewing the relation between tables.

    Hi,
    I imported a schema dump into an Oracle 10g R2 database; there are about 800-plus tables in that schema. I would like to know the relations between the tables. So, what tools are available for viewing the relations between tables? I have the TOAD and Oracle SQL Developer tools; if there are any free tools for this, please let me know the URLs.
    Regards,
    Sabdar Syed.

    Hmmm, for free, apart from what Eric gave earlier, not easy...
    This was already discussed here:
    Re: Table Dependencies
    Nicolas.
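    Failing a dedicated tool, the data dictionary itself can list parent/child relations; for example (a sketch, run as the schema owner, assuming the foreign keys stay within the schema):
    -- Map each foreign-key constraint to the table it references:
    SELECT c.table_name      AS child_table,
           c.constraint_name AS fk_name,
           p.table_name      AS parent_table
      FROM user_constraints c
      JOIN user_constraints p
        ON c.r_constraint_name = p.constraint_name
     WHERE c.constraint_type = 'R'
     ORDER BY c.table_name;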

  • What are the Relations between Journalizing and IKM?

    What is the best method to use in the following scenario?
    I have about 20 source tables with a large amount of data.
    I need to create interfaces that join the source tables into target tables.
    The source tables receive inserts every few seconds, of hundreds to thousands of rows.
    There can be a gap of a few seconds between the inserts into the different tables that should be joined.
    The source and target tables are on the same Oracle instance and schema.
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update', and how I can use them in my scenario.
    In general, what are the relations between 'Journalizing' and 'IKM'?
    Should I use both of them? Or maybe it is better to delete from and insert into the target tables?
    I want to understand the role of 'Journalizing CDC'.
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Does 'Journalizing' need a PK on the tables?
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Thanks in advance, Yael

    Hi Yael,
    I will try and answer as many of your points as I can in one post :-)
    Journalizing is a way of tracking only the changed data in your source system. If your source tables had a date_modified column, you could always use that as a filter when scanning for changes, rather than CDC. Log-based CDC (Asynchronous in ODI: LogMiner/Streams, or GoldenGate for example) removes the overhead of placing a trigger on the source table to track changes, but be aware that it doesn't fully remove the need to scan the source tables.
    In answer to your question about primary keys: Oracle CDC with ODI will create an unconditional log group on the columns that you have defined in ODI as your PK. The PK columns are tracked by the database and presented in a journal table (J$<source_table_name>); this journal table is joined back to the source table via a journalizing view (JV$<source_table_name>) to get the rest of the row (i.e. the non-PK columns). So be aware that when ODI comes around to get all the data in the journalizing view (i.e. inserts, updates and deletes), the source database performs a join back to the source table.
    You can negate this by specifying ALL source table columns as your PK in ODI - this forces all columns into the unconditional log group, the journal table, etc. You will need to tweak the JKM to change the syntax sent to the database when starting the journal. I have done this in the past, using a flexfield in the datastore to toggle 'Full Column' / 'Primary Key Cols' in the JKM setup (there are a few E-Business Suite tables with no primary key, so we had to do this). The only problem with this approach is that with no PK you need to make sure you only get the 'last' update, in the right order, to apply to your target tables; without that you might process the update before the insert, for example, and be out of sync.
    So JKMs provide a mechanism for 'change data only' to be presented to ODI. If you want to handle deletes in your source table, CDC is useful (otherwise you don't capture the deletes with a normal LKM/IKM setup).
    IKM Incremental Update can be used with or without JKMs; it is for integrating data into your target table. Typically it will do a NOT EXISTS or a MINUS when loading the integration table (I$<target_table_name>) to ensure you only get 'changed' rows in the load into the target.
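    As a schematic illustration of that NOT EXISTS step (the names are placeholders; the real IKM-generated SQL is more involved):
    -- Load only rows that are new or changed with respect to the target:
    INSERT INTO I$_TARGET (id, col1, col2)
    SELECT s.id, s.col1, s.col2
      FROM C$_SOURCE s
     WHERE NOT EXISTS (SELECT 1
                         FROM target_table t
                        WHERE t.id   = s.id
                          AND t.col1 = s.col1
                          AND t.col2 = s.col2);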
    user604062 wrote:
    I want to understand the roles of 'Journalizing CDC' and 'IKM - Incremental Update', and how I can use them in my scenario.
    Hopefully I have explained that above. It's the type of thing you really need to play around with, and thoroughly review the operator logs to see what is actually going on (I think this is a very good guide to setting it up: http://soainfrastructure.blogspot.ie/2009/02/setting-up-oracle-data-integrator-odi.html).
    In general, what are the relations between 'Journalizing' and 'IKM'?
    The JKM simply presents (only) changed data to ODI; it removes the need for you to decide 'how' to get the updates, and removes the need for costly scans on the source table (full source-to-target comparisons, scanning for updates based on a last-update date, etc.).
    Should I use both of them? Or maybe it is better to delete and insert into the target tables?
    Delete and insert into the target is fine, but ask yourself how you identify which rows to process. Inserts and updates are generally OK; to spot a delete you need to compare the tables in full (target table MINUS source table = deleted rows). Do you want to copy the whole source table every time to perform this? Are they in the same database?
    I want to understand the role of 'Journalizing CDC'.
    It's the ODI mechanism for configuring, starting and stopping the change data capture process in the source systems. There are different KMs for separate technologies, and a few to choose from for Oracle (Triggers (Synchronous), Streams/LogMiner (Asynchronous), GoldenGate, etc.).
    Can 'IKM - Incremental Update' work without 'Journalizing'?
    Yes, of course. Without CDC your process would look something like:
    Source table ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    With CDC your process looks like:
    Source journal (J$ table with JV$ view) ----< LKM >---- Collection table (C$) ----< IKM >---- Integration table (I$) ----< IKM >---- Target table
    As you can see, it is the same process after the source table (there is an option in the interface to enable the J$ source; the IKM step changes with CDC, as you can use 'Synchronise Journal Deletes').
    Does 'Journalizing' need a PK on the tables?
    Yes - at least a logical PK in the datastore; see my reply at the top for the reasons why (log groups, joining the J$ table back to the source table, etc.).
    What should I do if I can't put a PK on them (there can be multiple identical rows)?
    Either talk to the source system people about adding one, or be prepared to change the JKM (and maybe the LKM and IKMs); you can try putting all columns in the PK in ODI. Ask yourself this: if you have 10 identical rows in your source and target tables, and one row gets updated, how can you identify which row in the target table to update?
    Thanks in advance, Yael
    A lot to take in! As I advised, I would recommend you get a little test area set up, and also read the Oracle database documentation on CDC, as it covers a lot of the theory that ODI is simply implementing.
    Hope this helps!
    Alastair

  • Where are all the photos after import?

    I don't know where to begin. iPhoto has gotten worse and worse over the years, but unfortunately I have tens of thousands of photos in it going back decades.
    The tipping point for me was after photo streams and extra syncing was added to iOS 5. After importing to iPhoto I simply don't know where all my photos are anymore.
    The import process itself is all funny. Sometimes it seems to just import the photo stream - which doesn't include videos.
    Sometimes it will import everything if I press Import again, but the numbers don't match the numbers of photos on my iPhone.
    If I check latest import it doesn't match what was just imported.
    If I select to delete photos on my iPhone after import it deletes some but not all of them.
    I don't have any confidence anymore that the photos imported. And organizing them - forget about it. It's the worst mess I've ever seen. I can't find anything anymore.
    It definitely doesn't "just work."
    I would like to clear out the photos on my iPhone 4 because the camera has gotten slow and I read somewhere that having too many photos in the camera roll can slow down the camera operation itself. Is that true? That would be sort of dumb if true because the only way of making albums directly on your iPhone is to leave the original photos in the camera roll.
    Is there anything better than iPhoto I can use to organize all my photos?
    Thanks,
    doug

    Well, I do like that Photostream syncs between my iPhone and iPad. What I don't understand is why the import into iPhoto seems to sometimes have just Photostream events, and then separately events sorted by date, but without the items that were imported into Photostream events.
    Rather than turn off Photostream altogether, what would be nice is if I could just turn it off in iPhoto. Is that possible?
    I'm sorry my "finding and organizing" complaint sounds vague. It's hard to explain clearly because everything is so convoluted in iPhoto now.
    Basically, it used to be that
    (1) I could import and everything would be broken up into events by date;
    (2) I could delete from my iPhone after import and everything would be deleted.
    But (1) is not happening reliably. And neither is (2).
    I just want to be able to organize my photos and find them. Even finding them in the current version of iPhoto got harder than it was in the previous version of iPhoto. It's just become a really confusing mess of a piece of software to use.
    Right now the immediate problem, to attempt to be clearer, is that my imported photos and movies are not broken down by date-delimited events. The problem seems to be related to Photostream imports which confuse the normal event grouping process.
    Thanks,
    doug

  • Unlogged Missing Photos After Import From Aperture

    Hi!
    I have just made the switch from Aperture to Lightroom, and have used the 1.1 version of the Aperture import plugin.
    In my Aperture library I have, according to Library -> Photos, 11105 photos; however, after importing to Lightroom I have only 10967 photos. I have checked the import log, and there were 4 items which failed to import - 3 were .mpo files (panoramas from an Xperia) and 1 was a .gif file. This leaves a deficit of 133 photos that I can't account for.
    Is there any way to compare the aperture library to the lightroom library to see what is missing?

    *WARNING* Once again, this is a VERY long post! And this contains not only SQL, but heaps of command-line fun!
    TL;DR summary: Aperture is storing duplicates on disk (and referencing them in the DB) but hiding them in the GUI. Exactly how it does this, I'm not sure yet. And how to clean it up, I'm not sure either. But if you would like to know how I proved it, read on!
    An update on handling metadata exported from Aperture. Once you have a file, if you try to view it in the terminal, perhaps like this:
    $ less ApertureMetadataExtendedExport.txt
    "ApertureMetadataExtendedExport.txt" may be a binary file.  See it anyway?
    you will get that error. It turns out I was wrong: it's not (only?) due to the size of the file / line length; it's actually the file type Aperture creates:
    $ file ApertureMetadataExtendedExport.txt
    ApertureMetadataExtendedExport.txt: Little-endian UTF-16 Unicode text, with very long lines
    The key bit being "Little-endian UTF-16", that is what is causing the shell to think it's binary. The little endian is not surprising, after all it's an X86_64 platform. The UTF-16 though is not able to be handled by the shell. So it has to be converted. There are command line utils, but Text Wrangler does the job nicely.
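    The conversion can also be scripted, e.g. with iconv (assuming the file really is little-endian UTF-16, as reported above):
    $ iconv -f UTF-16LE -t UTF-8 ApertureMetadataExtendedExport.txt > ApertureMetadataExtendedExport-utf8.txt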
    After conversion (to Unicode UTF-8):
    $ file ApertureMetadataExtendedExport.txt
    ApertureMetadataExtendedExport.txt: ASCII text, with very long lines
    and
    $ less ApertureMetadataExtendedExport.txt
    Version Name    Title   Urgency Categories      Suppl. Categories       Keywords        Instructions    Date Created    Contact Creator Contact Job Title       City    State/Province  Country Job Identifier  Headline        Provider        Source  Copyright Notice        Caption Caption Writer  Rating  IPTC Subject Code       Usage Terms     Intellectual Genre      IPTC Scene      Location        ISO Country Code        Contact Address Contact City    Contact State/Providence        Contact Postal Code     Contact Country Contact Phone   Contact Email   Contact Website Label   Latitude        Longitude       Altitude        AltitudeRef
    So, there you have it! That's what you have access to when exporting the metadata. Helpful? Well, at first glance I didn't think so - as the "Version Name" field is just "IMG_2104", no extension, no path etc. So if we have multiple images called "IMG_2104" we can't tell them apart (unless you have a few other fields to look at - and even then, just comparing to the file system entries wouldn't be possible). But! In my last post, I mentioned that the Aperture SQLite DB (Library.apdb, the RKMaster table in particular) contained 11130 entries, and if you looked at the schema, you would have noticed that there was a column called "originalVersionName" which should match! So, in theory, I can now create a small script to compare metadata with database and find my missing 25 files!
    First of all, I need to add that, when exporting metadata in Aperture, you need to select all the photos! ... and it will take some time! In my case TextWrangler managed to handle the 11108 line file without any problems. And even better, after converting, I was able to view the file with less. This is a BIG step on my last attempt.
    At this point it is worth pointing out that the file is tab-delimited (csv would be easier, of course) but we should be able to work with it anyway.
    To extract the version name (first column) we can use awk:
    $ cat ApertureMetadataExtendedExport.txt | awk -F'\t' '{print $1}' > ApertureMetadataVersionNames.txt
    and we can compare the line counts of both input and output to ensure we got everything:
    $ wc -l ApertureMetadataExtendedExport.txt
       11106 ApertureMetadataExtendedExport.txt
    $ wc -l ApertureMetadataVersionNames.txt
       11106 ApertureMetadataVersionNames.txt
    So far, so good! You might have noticed that the line count is 11106, not 11105; the input file still has the header line I printed earlier. So we need to remove the first line. I just use vi for that.
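    (A non-interactive alternative to vi: tail -n +2 prints everything from the second line onwards.)
    $ tail -n +2 ApertureMetadataVersionNames.txt > tmp.txt && mv tmp.txt ApertureMetadataVersionNames.txt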
    Lastly, the file needs to be sorted, so we can ensure we are looking in the same order when comparing the metadata version names with the DB version names.
    $ cat ApertureMetadataVersionNames.txt | sort > ApertureMetadataVersionNamesSorted.txt
    To get the Version Names from the DB, fire up sqlite3:
    $ sqlite3 Library.apdb
    sqlite> .output ApertureDBMasterVersionNames.txt
    sqlite> select originalVersionName from RKMaster;
    sqlite> .exit
    Checking the line count in the DB Output:
    $ wc -l ApertureDBMasterVersionNames.txt
       11130 ApertureDBMasterVersionNames.txt
    Brilliant! 11130 lines as expected. Then sort as we did before:
    $ cat ApertureDBMasterVersionNames.txt | sort > ApertureDBMasterVersionNamesSorted.txt
    So, now, in theory, running a diff on both files should reveal the 25 missing files... I must admit, I'm rather excited at this point!
    $ diff ApertureDBMasterVersionNamesSorted.txt ApertureMetadataVersionNamesSorted.txt
    IT WORKED! The output is a list of changes you would need to make to the first input file to make it look the same as the second. Essentially, this will (in my case) show the version names that are present in the DB but missing from the Aperture metadata.
    So, a line like this:
    1280,1281d1279
    < IMG_0144
    < IMG_0144
    basically just means that IMG_0144 appears twice more in the DB than in the metadata. (Note: this is specific to the way I ordered the input files to diff; you will get the same basic output if you reverse the input files, but the interpretation is obviously reversed, as shown here - note that in the first output we have 'd' for deleted, and in the second output it's 'a' for added:)
    1279a1280,1281
    > IMG_0144
    > IMG_0144
    In any case, looking through my output and counting, I indeed have 25 images to investigate. The problem here is we just have a version name; fortunately, in my output most are unique, with just a couple of duplicates. This leads me to believe that my "missing" files are actually Aperture handling duplicates (though why it's hiding them I'm not sure). I could look at the path etc. in my DB dump as well, and that might help, but as it's just 25 cases I will instead get a FS dump and grep for each version name. This will give me all the files on the FS that match. I can then look at each and see what's happening.
    Dumping a list of master files from the FS: (execute from within the Masters directory of your Aperture library)
    $ find . -type f > ApertureFSMasters.txt
    This will be a list including path (relative to Master) which is exactly what we want. Then grep for each version name. For example:
    $ grep IMG_0144 ApertureFSMasters.txt
    ./2014/04/11/20140411-222634/IMG_0144.JPG
    ./2014/04/23/20140423-070845/IMG_0144 (1).jpg
    ./2014/04/23/20140423-070845/IMG_0144.jpg
    ./2014/06/28/20140628-215220/IMG_0144.JPG
    Here is a solid bit of information! On the FS I have 4 files called IMG_0144, yet if I look in the GUI (or metadata dump) I only have 2.
    $ grep IMG_0144 ApertureMetadataVersionNamesSorted.txt
    IMG_0144
    IMG_0144
    So, there are the two files already!
    The path preceding the image in the FS dump, is the date of import. So I can see that two were imported at the same time, and two separately. The two that show up in the GUI have import sessions of 2014-06-28 @ 09:52:20 PM and 2014-04-11 @ 10:26:34 PM. That means that the first and last are the two files that show in the GUI, the middle two do not.... Why are they not in the GUI (yet are in the DB) and why do they have the exact same import date/time? I have no answer to that yet!
    I used open <filename> from the terminal prompt to view each file, and 3 out of my 4 are identical, and the fourth different.
    So, lastly, with a little command line fu, we can make a useful script to tell us what we want to know:
    #!/bin/bash
    # Usage: ./calculateSHA.sh <version name>
    # Print the SHA-1 of every master file whose name matches $1.
    grep "$1" ApertureFSMasters.txt | sed 's|^\.|Masters|' | awk '{print "<full path to Aperture Library folder>"$0}' | \
    while IFS= read -r line; do
      openssl sha1 "$line"
    done
    Replace the <full path to Aperture Library folder> with the full path to your Aperture library folder (including a trailing slash), perhaps /volumes/some_disk_name/some_username/Pictures/.... etc. Then chmod 755 the script, and execute ./<scriptname> <version name>, so something like:
    $ ./calculateSHA.sh IMG_0144
    What we're doing here is taking in the version name we want to find (for example IMG_0144) and looking for it in the FS dump list. Remember that the file contains image files relative to the Aperture library Masters path, which look something like "./YYYY/MM/DD/YYYYMMDD-HHMMSS/<FILENAME>" - we use sed to replace the "./" part with "Masters". Then we pipe it to awk and insert the full path to the Aperture library before the file name; the end result is a line which contains the absolute path to an image. There are several other ways to solve this, such as generating the FS dump from the root dir. You could also combine the awk into the sed (or the sed into the awk)... but this works. Each line is then passed, one at a time, to the openssl program to calculate the SHA-1 checksum for that image. If a SHA-1 matches, then those files are identical (yes, there is a small chance of a collision in SHA-1, but it's unlikely!).
    So, at the end of all this, you can see exactly what's going on. And in my case, Aperture is storing duplicates on disk and not showing them in the GUI. To be honest, I don't actually know how to clean this up now! So if anyone has any ideas, please let me know. I can't just delete the files on disk, as they are referenced in the DB. I guess it doesn't make too much difference, but my personality requires me to clean this up (at the very least to provide closure on this thread).
    The final point to make here is that Lightroom also has 11126 images (11130 less the 4 non-compatible files), so it has taken in all the duplicates during the import.
    Well, that was a fun journey, and I learned a lot about Aperture in the process. And yes, I know this is a Lightroom forum and maybe this info would be better on the Aperture forum; I will probably update it there too. But there is some tie back to the Lightroom importer, to let people know what's happening internally. (I guess I should update my earlier post, where I assumed the Lightroom Aperture import plugin was using the FS only; it *could* be using the DB as well (and probably is, so it can get more metadata).)
    UPDATE: I jumped the gun a bit here, and based my conclusion on limited data. I have finished calculating the SHA-1s for all my missing versions, as well as comparing the counts in the GUI to the counts on the FS. For the most part, where the GUI count is lower than the FS count, there is a clear duplicate (two files with the same SHA-1). However, I have a few cases where the FS count is higher and all the images on disk have different SHA-1s! Picking one at random from my list: I have 3 images in the GUI called IMG_0843. On disk I have 4 files, all with different SHA-1s. Viewing the actual images, 2 look the same and the other 2 are different. So that matches 3 "unique" images.
    Using Preview to inspect the exif data for the images which look the same:
    image 1:
    Pixel X Dimension: 1 536
    Pixel Y Dimension: 2 048
    image 2:
    Pixel X Dimension: 3 264
    Pixel Y Dimension: 2 448
    (image 2 also has an extra Regions dictionary in the exif)
    So! These two images are not identical (we knew that from the SHA-1), only similar (the content is the same, but the resolution is different), yet Aperture seems to be treating them as duplicates. That's not good! Does this mean that if I resize an image for the web, and keep both, Aperture won't show me both? (At least it keeps both on disk, though, I guess...)
    The resolution of image 1 is suspiciously like the resolutions that were uploaded to (the original version of) iCloud Photos on the iPhone (one of the reasons I never used it). And indeed, the photo I chose at random here is one that I have in an iCloud-stored album (I have created a screensaver synced to iCloud, to use on my various Macs and Apple TVs). Examining the data for the cloud version of the image shows the resolution to be 1536x2048. The screensaver contains 22 images - I theorised earlier that these might be the missing images; perhaps I was right after all? Yet another avenue to explore.
    Ok. I dumped the screensaver metadata, converted it to UTF-8, grabbed the version names, and sorted them (just like before). Then I compared them to the output of the diff command. Yep! The 22 screensaver images match 22 of the 25 missing images. The other 3 appear to be exact duplicates (same SHA-1) of images already in the library. That almost solves it! So then, can I conclude that Lightroom has imported my iCloud screensaver as normal photos of lower resolution? In which case it would likely do the same for any shared photo source in Aperture, and perhaps it would be wise to turn that feature off before importing to Lightroom?

  • What is the relation between delivery number, sales order number and invoice?

    What is the relation between delivery number, sales order number and invoice?

    Look at the VBFA table.
    Go to the VBFA table and enter the order number in VBELV with VBTYP_N = 'C'; then VBELN is the delivery.
    If you enter the delivery number in VBELV with VBTYP_N = 'J', then VBELN is the invoice.
    Here VBTYP_N is the important field.
    VBFA is the sales document flow table, and a very important table.
    Thanks
    seshu

  • DBAdapter Creating Relations between tables not having PF-FK relationship

    I am writing a process which has to pull data from three tables.
    SELLER_HEADER (inv_Num is Primary Key)
    SELLER_LINE_ITEMS ( no PK but has inv_num and line_number which together are unique)
    BUYER_LINE_ITEMS (no PK but has cust_num, cust_PO_num and line_num that are unique)
    I want to create a DB Adapter which would take in an invoice number, customerNumber and customerPONumber
    and fetch me data whose XSD roughly has this structure:
    One Node of Type Header
    ---- Column 1 of the Header Table
    ---- Column 2 of the Header Table
    ---- Column 3 of the Header Table
    ... and so on
    Multiple Nodes of
    ---- Column 1 of the Seller Table
    ---- Column 2 of the Seller Table
    ---- Column 3 of the Seller Table
    and
    ---- Column 1 of the Buyer Table
    ---- Column 2 of the Buyer Table
    ---- Column 3 of the Buyer Table
    I tried some combinations and found that if I have a 1:M mapping for Header - Seller Line Item
    and a 1:1 mapping between Seller Line Item and Buyer Line Item, then I get the desired XSD.
    So, I created a 1:M relation between Header and Seller Line Items. However, I cannot create a one-to-one mapping between Seller Line Items and Buyer Line Items, nor can I create a 1:M mapping between Seller Header and Buyer Line Items. That is why the generated XSD shows
    <SomeCollectionName>
    <SellerHeader>
    <BuyerLineItems>
    <SellerLineItems>
    </SomeCollectionName>
    Any pointers for this? How do I make a relation based on the input values to the DBAdapter?

    Does the CORE_BUSINESS schema have REFERENCES and SELECT privileges on the table you are trying to reference with the foreign key constraint?
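    If it doesn't, grants along these lines would be needed (a sketch; the grantor is whichever schema owns the referenced tables):
    GRANT SELECT, REFERENCES ON seller_line_items TO core_business;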

  • Error after import external webservice(RFC) wsdl url to Process Composer

    Hi all,
    I am trying to use an RFC web service in my BPM as follows:
    1. Expose the RFC as a web service using CAF (import the external RFC service and then create an application service that uses this external service).
    2. Define a destination in NWA.
    3. Create a Process Composer project, and import the external web service (RFC) WSDL file as a service interface in the project.
    After importing, I get the error: "the port type specified for the ...binding is undefined. Check port type name and ensure it is defined."
    If I import another external service that is not an RFC (such as a business object), there is no error.
    My system is NWCE 7.11
    Thanks in advance,
    Sinh.
    Edited by: Sinh Nguyen Van on Jul 20, 2009 8:29 AM

    Hi Bharath,
    Below are the contents of the WSDL URL and the error message; thanks.
    Error message:
    The 'zfm_rfc_caf_as' port type specified for the 'zfm_rfc_caf_asBinding' binding is undefined. Check the 'zfm_rfc_caf_as' port type name and ensure it is defined.
    wsdl url :
    - <definitions xmlns="http://schemas.xmlsoap.org/wsdl/" targetNamespace="http://www.sap.com/caf/demo.sap.com/s00_caf_rfc/modeled/zfm_rfc_caf_as" xmlns:b0="http://www.sap.com/caf/demo.sap.com/s00_caf_rfc/modeled/zfm_rfc_caf_as">
      <import namespace="http://www.sap.com/caf/demo.sap.com/s00_caf_rfc/modeled/zfm_rfc_caf_as" location="http://sinhnv-lap:50000/zfm_rfc_caf_as/zfm_rfc_caf_asBeanImpl?wsdl=binding&mode=ws_policy" />
    - <service name="zfm_rfc_caf_as">
    - <port name="zfm_rfc_caf_asBindingPort" binding="b0:zfm_rfc_caf_asBinding">
      <address xmlns="http://schemas.xmlsoap.org/wsdl/soap/" location="http://sinhnv-lap:50000/zfm_rfc_caf_as/zfm_rfc_caf_asBeanImpl" />
      </port>
      </service>
      </definitions>
    Edited by: Sinh Nguyen Van on Jul 22, 2009 4:18 AM

  • After importing images from my card using LR 5.4, the GPS data does not show in the metadata panel. However, when I look at the imported images using Bridge, the GPS data is visible. Anybody know why LR is not seeing the GPS data? Camera is Canon 6D.

    After importing images from my card using LR 5.4, the GPS data does not show in the metadata panel. However, when I look at the imported images using Bridge, the GPS data is visible. Anybody know why LR is not seeing the GPS data? Camera is Canon 6D.

    Ok, the issue seems to be solved. The problem was this:
    The many hundreds of files (a raw and an xmp per image) had been downloaded by FTP in no specific order - several files at a time in the download queue, both raws and xmps. Most of the time, the small xmp files finished downloading first, and hence the "last change date" of these xmp files was OLDER than the "last change date" of the raw file - Lightroom then seems to ignore the existence of the xmp file and does not read it during import. (A minute's difference is enough to run into the problem.)
    By simply using the FTP client in a way that all the large raw files get downloaded first, followed by the xmp files, we ensured that all the "last changed dates" of the xmp files are NEWER than those of the related raw files (or at least not older).
    And then LR reads them, and all the metadata information is set/read correctly.
    So this is solved.
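    If re-downloading is not an option, a workaround in the same spirit would be to bump the modification times of the xmp files so they are newer than the raws before importing (a hypothetical sketch; the path is a placeholder):
    $ find /path/to/downloads -name '*.xmp' -exec touch {} +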

  • All pictures in 1 folder after import - problem?

    Hi,
    due to a hardware change (I'm now an iMac owner ;)) I had to import all my photos into iPhoto.
    Previously I had made a copy of all my photos on an external hard drive, unfortunately without any folder structure. In other words, all photos (approx. 9,000) were inside one folder on the external hard drive.
    After importing the photos, iPhoto automatically put all the files/pictures into the folder ORIGINALS, but also without any subfolders. In the meantime I have, of course, split the one big event into 40-50 events inside iPhoto.
    Overall, a new photo structure was planned anyway!
    Question:
    Is it a problem for speed, or any other issue, if iPhoto always has to search through the whole folder when accessing a single photo? I did not get any warning when importing the photos within the iPhoto application. Or is it only a question of folder structure inside iPhoto, without any limits related to speed or other possible issues? Or are the originals inside iPhoto never touched?
    Does it make sense to split the original files within the folder ORIGINALS into subfolders? AFAIK a manual change of the iPhoto structure is not recommended, right?
    I hope someone understands my nonsense ..

    AFAIK a manual change of the iPhoto structure is not recommended, right?
    Very right. Doing that will corrupt the Library.
    Don't change anything in the iPhoto Library Folder via the Finder or any other application. iPhoto depends on the structure as well as the contents of this folder. Moving things, renaming things or otherwise making changes will prevent iPhoto from working and could even cause you to damage or lose your photos.
    It makes no difference whatever how iPhoto stores the photos inside the Library Package. You do all your work in the iPhoto Window and there is never a need for you to go into the Library package at all.
    So, organise your photos in the iPhoto Window
    Regards
    TD

  • After importing pictures they became grey with dotted white lines around

    I have imported pictures (RAW CR2 format) from my camera hundreds of times and it has always gone well. But this time only the import worked well (I saw the preview as each picture was imported). Then iPhoto asked if I would like to delete the pictures on the camera after the import, and I deleted them. But when I went to look at the pictures, they were grey with dotted white lines around all of them. When I choose a picture and check the properties, I can see that the picture uses normal space. I tried to drag one picture to a directory in the Finder, but this didn't work either. The export doesn't work either. When I scroll fast among the pictures I can see them for a short period, but when I stop scrolling they turn grey with dotted white lines around them. What's wrong? And how do I solve it?
    system: iPhoto11 and Lion.

    The ! turns up when iPhoto loses the connection between the thumbnail in the iPhoto Window and the file it represents.
    Option 1
    Back Up and try rebuild the library: hold down the command and option (or alt) keys while launching iPhoto. Use the resulting dialogue to rebuild. Choose to Rebuild iPhoto Library Database from automatic backup.
    If that fails:
    Option 2
    Download iPhoto Library Manager and use its rebuild function. This will create a new library based on data in the albumdata.xml file. Not everything will be brought over - no slideshows, books or calendars, for instance - but it should get all your albums and keywords back.
    Because this process creates an entirely new library and leaves your old one untouched, it is non-destructive, and if you're not happy with the results you can simply return to your old one. .
    Regards
    TD
