Some questions on versioning and synchronizing metadata

Hi all!
I am quite new to warehousing and Oracle Warehouse Builder, so I have a few questions about some common issues. I would appreciate it if those of you with experience in this domain could share some good-practice knowledge :)
I am using OWB 10.2
First of all, I would like to know whether you have any suggestions for version control and for synchronizing projects between team members who do not work on the same repository, when working on a bigger project (I saw that OWB has integrated multi-user support for handling object locks and user sessions).
I saw that one way of migrating metadata from one place to another is the import/export facility integrated in OWB. This creates .mdl files, which are a kind of "dump" of the metadata. The problem with these .mdl files, and the reason I do not think they are a good way to synchronize, is that the .mdx and .xml files contained in the .mdl (which is basically a zip) carry a lot of information (creation dates, various timestamps, etc.) that is refreshed on every export. If we synchronized these files, for example with CVS, we would always get differences between them even though they contain the same thing and only the timestamps have changed.
Another issue is that we would have two alternatives: dump the whole project, which is awkward because a single large file has to be synchronized between users, especially on a big project; or export a separate .mdl file for each object in the project (each mapping, each table, etc.) and synchronize each of those files, which is inefficient when every file has to be re-imported one by one.
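From what I can tell, the per-object re-import could at least be scripted in OMB Plus rather than clicked through the Design Center; something like the rough sketch below (the connect string and directory are placeholders, and the exact OMBIMPORT clauses would need to be checked against the OMB Plus reference for your release):

    # Sketch: batch-import every per-object .mdl found in one directory.
    # The connect details are placeholders for your own design repository.
    OMBCONNECT owb_user/owb_password@localhost:1521:orcl

    foreach mdl [lsort [glob -nocomplain /work/owb_export/*.mdl]] {
        puts "Importing $mdl ..."
        # UPDATE_MODE is assumed here; pick the import mode that matches your merge policy.
        OMBIMPORT MDL_FILE '$mdl' USE UPDATE_MODE OUTPUT LOG '$mdl.log'
    }

    OMBCOMMIT
    OMBDISCONNECT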
So please, if you can share how you work on a big project with many developers in OWB, I would really appreciate it.
Another thing I would like to know: is there a way to generate, from an existing project (one created with OWB), a dump of the corresponding OMB commands (maybe as a Tcl script)? I saw that experienced users implement warehousing with Tcl and the OMB language. I downloaded Oracle's example warehouse project and saw that it is built entirely from Tcl scripts (no .mdl file involved). It would be nice to be able to generate the OMB commands from an existing project.
I see an OWB project like a database that can be built up purely from OMB commands, with OWB as a graphical tool on top (the same way a database can be constructed from DDL commands alone or with SQL Developer). This is why I am asking about a way of dumping the OMB commands that would recreate an OWB project.
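To make the analogy concrete, here is a minimal hand-written sketch of the kind of OMB Plus (Tcl) commands I have in mind, building a project, a module and a table the way DDL builds a schema. All names are made up, and the exact property clauses would need to be verified against the OMB Plus command reference for 10.2:

    # Sketch only: create a project, a module and a table purely from OMB commands.
    OMBCONNECT owb_user/owb_password@localhost:1521:orcl

    OMBCREATE PROJECT 'DEMO_DWH'
    OMBCC 'DEMO_DWH'   ;# change context into the project, like "cd"

    OMBCREATE ORACLE_MODULE 'STAGING' \
        SET PROPERTIES (DESCRIPTION) VALUES ('staging area')
    OMBCC 'STAGING'

    OMBCREATE TABLE 'CUSTOMERS' \
        SET PROPERTIES (DESCRIPTION) VALUES ('customer source table') \
        ADD COLUMN 'CUST_ID' SET PROPERTIES (DATATYPE) VALUES ('NUMBER') \
        ADD COLUMN 'CUST_NAME' SET PROPERTIES (DATATYPE, LENGTH) VALUES ('VARCHAR2', 100)

    OMBCOMMIT
    OMBDISCONNECT

A dump facility would essentially have to emit a script like this for every object that already exists in the repository.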
Please give me some advice, and correct me if I said something dumb :D I really am new to warehousing and I would appreciate it if you experienced folks could share some information.
Thank you very much!
Alex21

Depends. Having everyone working in the same project certainly simplifies merging a lot and is generally my preference. But I also recognize that some projects are complex enough that people wind up stepping on each other's toes when that is the case. In those cases I try to minimize the problem of merging changes by keeping common structural objects (code libraries, tables, views, etc.) in a single, strictly controlled, central project schema and having the developers' personal work areas reference them by synonym, so that they are unable to alter them to the detriment of others.
If they want to change a common object, they need to drop their synonym and make a local copy which they can alter, and then there is a managed process by which those changes get merged back into the main project schema.
This way any changes MUST go through a central schema, we can put processes in place to notify all of the team of any impending changes, and can also script updates across the team.
Every hour a script runs automatically that checks for dropped synonyms and notifies the project leader. It especially checks for two developers who have built local copies of the same object and notifies each that they need to coordinate with each other, as they are risking a conflict. When a structural change is submitted back to the central shared schema, it is added to a batch that is installed at end of business, and a list of those impending changes is circulated to the team along with impact analysis for dependencies. The install script updates the main schema, then also drops the local copy of the object in the schema of the developer who made the change and re-establishes the synonym there, to get back to the status quo for the change monitoring. Finally, the change is then pushed out to all of the developer areas via OMBPlus. So each morning the developers return to an updated and synched environment as far as the underlying structure goes.
This takes care of merging structural issues, and the management of the team should minimize other metadata merging by managing the worklist of who is to be working on a given mapping or process flow at a given time. Anyone found to be making extraneous changes to a mapping or process flow when it is not in their job queue, without getting pre-approval, will be spoken to VERY firmly as this is counter to policy. And yes, OWB objects such as mappings are then also coordinated to the central project via import/export. OMBPlus scripts also propagate these changes daily across the team.
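To give an idea of what that propagation looks like, a central-to-developer push can be sketched in OMB Plus roughly as follows. The connect strings, project name and file paths are made up here, and the exact OMBEXPORT/OMBIMPORT clauses should be checked against the OMB Plus reference for your release:

    # Sketch: export the central project once, then push it into each developer repository.
    OMBCONNECT central_owner/central_pwd@central-host:1521:orcl
    OMBEXPORT TO MDL_FILE '/nightly/central_project.mdl' \
        FROM PROJECT 'DEMO_DWH' OUTPUT LOG '/nightly/export.log'
    OMBDISCONNECT

    set developers {
        dev1_owner/dev1_pwd@dev-host:1521:orcl
        dev2_owner/dev2_pwd@dev-host:1521:orcl
    }
    foreach dev $developers {
        OMBCONNECT $dev
        # UPDATE_MODE is assumed so that developer-only objects are left untouched.
        OMBIMPORT MDL_FILE '/nightly/central_project.mdl' \
            USE UPDATE_MODE OUTPUT LOG '/nightly/central_import.log'
        OMBCOMMIT
        OMBDISCONNECT
    }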
Yep, there is a whole lot of scripting involved to get set up... but it saves a ton of time merging things and resolving conflicts down the road.
Cheers,
Mike

Similar Messages

  • Some questions on whitespace and &

    Dear all,
    I have some questions on whitespace and & that need your kind help:
    1. Apart from \n, \t, \r and the space character, are there any other characters that count as whitespace?
    2. When parsing an XML document, when is the ignorableWhitespace() method called? Will it be called from characters()?
    e.g. Where is the whitespace after <a> reported, and where is the whitespace between test1 and test2 inside <b> reported?
    <a>
       <b>test1 test2</b>
    </a>
    3. When should the & be escaped in a well-formed (or validated) XML file by replacing it with &amp;?
    Should it be escaped in any element content, such as <b>test1 &amp; test2</b>, except inside a <![CDATA[ ]]> section?
    And it should NOT be used unescaped in attributes either? e.g. in <a b="test1 & test2"/> it has to be escaped with the entity &amp; - am I right?
    Thanks!

    Thanks DrClap.
    > 2. When parsing an XML document, when is the ignorableWhitespace() method called? Will it be called from characters()?
    > It will be called when the document has a DTD and the parser is a validating one that is actually validating, when the parser sees whitespace that is not part of a DTD element. It will not be called from characters() because that is a method that you write.
    I wrote a ContentHandler and used the Xerces parser to parse an XML document. When I set the validating feature to false, I found that for:
    <a>
       <b>test1 test2</b>
    </a>
    the whitespace between <a> and <b> (including the newline) is actually reported through characters(), not ignorableWhitespace(). So does this mean that when validating, whitespace is always reported through ignorableWhitespace(), and when not validating, it is always reported through characters()?
    > 3. When should the & be escaped in a well-formed (or validated) XML file by replacing it with &amp;?
    > The XML Recommendation says "The ampersand character (&) and the left angle bracket (<) may appear in their literal form only when used as markup delimiters, or within a comment, a processing instruction, or a CDATA section. If they are needed elsewhere, they must be escaped using either numeric character references or the strings "&amp;" and "&lt;" respectively."
    So in this sense, the & in both <b>test1 & test2</b> and <a b="test1 & test2"/> is illegal unless it is replaced by &amp; - right? And what does "markup delimiters" mean in the XML Recommendation quoted above? Could you give an example?
    Thanks!

  • Some questions on PA and OM

    Hi,
    Some questions:
    1. If an employee works in the US and then moves to the UK, how do you track that, and in which infotype?
    2. What is dotted-line reporting, in simple terms?
    3. How do you assign different number ranges in Organizational Management for job, position and org unit, e.g. 200-300, 100-200, 400-500?
    From
    Rajani

    Hi Rajani,
    To address a dual reporting relationship, OM has the A/088 relationship; please check tcode OOOT, select object type S (Position), and look at the essential relationships.
    OONR is the tcode for number ranges. The first two characters in the subgroup denote the plan version and the last two characters the object type.
    For example, 01S would denote plan version 01 and object type S (Position); the setting also determines whether the numbers are going to be assigned internally by the system or externally by the user.
    01$$ would denote that it applies to all object types, should the $ be used.
    Select the subgroup and click on number range maintenance; you will be able to see the numbers assigned, or edit them by clicking the Change Interval button and assigning the from and to numbers accordingly. The numbers get assigned when the particular object is created.
    For example, if you assign the subgroup $$S, it applies to all plan versions and object type S.
    When you then create an object of that type, for example with PP01, the numbers get assigned to the object created.
    Hope this helps.
    Regards
    KG
    Edited by: SAPenjoy:) on Aug 19, 2010 11:30 AM

  • LR2 Question syncing keywords and/or Metadata.

    I am shooting RAW and import the files into my catalog as DNG.
    After adding keywords and changing metadata to all the DNG files I would like to apply those changes back to my original RAW files.
    Can I do that easily? Maybe adding XMP files to the same directories as the original RAW files?
    Once I have the RAW files tagged with the same keywords and metadata I plan to archive them and delete from my disk.
    Thank you,

    For your first question - you can select the folder containing the files, then use the Save Metadata command from the Folder panel context menu (see attached screen shot). Alternatively, you can select the actual files and then use the Save Metadata to File command (Ctrl/Cmd+S). This latter method is much slower.
    For your second question - not with your current workflow. That said, if you changed your workflow slightly it would be possible. To do it you need to import your raw files without converting to DNG. Next, you apply keywords, metadata, etc., then use the method described above for saving back to file (this creates the XMP sidecars). Now select all of the images in Grid view and convert the raw files to DNG (Convert to DNG from the Library menu).
    The final step would be to archive the raws plus the XMP sidecars; this will need to be done outside of Lightroom.

  • Some questions on RMAN and others

    hi,
    I have some doubts and need some clarification to clear them up... thanks in advance.
    can data be copied from one db say A to another db say B, if A is running on Windows 32bit OS and db B is on Solaris 64bit
    can I have a primary db on 10.2.0.4 and physical standby for this db on 11g ??
    I know RMAN can exclude tablespace but can we exclude tables like in dataguard %table_name% ....I know we can't just wanted to confirm
    Can I restore one specific tablespace from PROD to test ????
    I have an out-of-date TEST db and have added additional datafiles to PROD; how can I update TEST without recreating the entire db?

    can data be copied from one db say A to another db say B, if A is running on Windows 32bit OS and db B is on Solaris 64bit
    Yes you can do it either through transportable tablespace or using exp/imp (expdp/impdp in 10g)
    can I have a primary db on 10.2.0.4 and physical standby for this db on 11g ??
    No, this is not possible.
    I know RMAN can exclude tablespace but can we exclude tables like in dataguard %table_name% ....I know we can't just wanted to confirm
    I didn't understand the question. Please elaborate.
    Can I restore one specific tablespace from PROD to test ????
    You cannot restore but you can move using transportable tablespace feature.
    I have an out-of-date TEST db and have added additional datafiles to PROD; how can I update TEST without recreating the entire db?
    you can add those datafiles in TEST.

  • Funny how some questions get answered and others just viewed

    Anyone care to comment on this?

    It's a forum, not a support site.
    There's no guarantee that any question will be answered - it depends who bothers to read them and if they happen to know the answer.

  • Problems with map in LR5. Some images keep "locking" and all metadata appears as mixed.

    Hello
    I have recently started using Lightroom 5 and I try to place all of my photos in the map module. It worked perfectly fine for a month.
    1. The first problem is that the first two pictures in every folder are constantly "selected", so when I double-click on the map to deselect all images, the thumbnail row at the bottom keeps jumping back to the start.
    2. The second problem is that sometimes the coordinate (GPS) data appears as <mixed>, although I have selected only one picture. Sometimes I stack a bunch of pictures on the same spot, and some of them individually will appear as <mixed>, while others will show the correct details.
    3. The exact same problem happens with the Caption information. Pictures that have the GPS data shown as <mixed> show the Caption data as <mixed> as well. I am assuming this will be the case with every type of information I enter.
    Thank you very much for helping me, this frustrates me a lot!

    Dave
    It may have to do with the settings in Mail.
    When running Mail, go to preferences -> Accounts -> Mailbox Behaviors.
    You should now have the list of your email accounts to the left and to the right a list showing Drafts, Notes, Sent, Junk and Trash.
    If you do not need or want anything to be stored on your ISPs/mail account server, make sure the choices for "Store ..... messages on the server" are not checked. This option relates to Sent, Junk and Trash.
    If checked, uncheck them and you should then see the extra folders disappear from Mail. But be aware - by choosing not to store any messages on the server, your only backup will be on your computer.
    Hope this is of any help.
    All the best,
    Espen

  • Some questions on SmartDDE and Windows XP

    I have three Brooks Emerson mass flow controllers, but I can't control them
    using some Excel macros and the SmartDDE because Windows XP doesn't
    recognize the controllers by the COM port.
    How can I solve this problem?
    I also need the help file that comes with SmartDDE (when I installed the
    program no help file was created). Where can I find it?
    Where can I find the document regarding the HART protocol-based commands,
    Brooks part no. 541-C-053-AAA?
    Thanks in advance.

    Hi SmartDDE,
    I am unfamiliar with the instruments you are attempting to communicate with, but I found a good post about it here:
    http://forums.ni.com/ni/board/message?board.id=170&message.id=76346&requireLogin=False
    Hope this helps!
    Charlie S.
    Visit ni.com/gettingstarted for step-by-step help in setting up your system

  • Some questions about WFQ and Per-Hop Behavior

    Hi!
    I'm currently self-studying for my CCNP ONT certification exam.
    I have read the official certification guide for this exam from Cisco Press, but some points are still confusing to me:
    1- Why is WRED mostly deployed on core devices rather than on all devices in an enterprise network? (The author said: "Because WRED is usually applied to network core routers....") And which congestion-avoidance mechanism should I deploy on non-core devices? If I understand correctly, those (distribution and access) devices can also experience congestion at some point of the day, and tail drop should be avoided on them as well.
    2- I understand WRED is an evolution of RED. Is there still any reason to choose RED over WRED? If WRED is available on a device, is RED also available? In my book the example says that WRED is activated with the RANDOM-DETECT command; what about activating plain RED?
    3- I'm confused about the "class selector per-hop behavior". If I understand correctly, we have 8 "class selector per-hop behavior" sub-classes for each of the Default per-hop behavior, Assured Forwarding 1 to 4, and Expedited Forwarding, where the lowest value has a higher probability of being dropped. The class-selector PHBs work the inverse of their parents, where the highest value has the higher probability of being dropped.
    In the example (copied from the book), how is the drop preference selected:
    random-detect dscp-based
    random-detect dscp af21 32 40 10
    random-detect dscp af22 28 40 10
    random-detect dscp af23 24 40 10
    random-detect dscp cs2 22 40 10
    I understand correctly how to interpret the "random-detect dscp af" lines in comparison with each other, but I don't understand where I should place the "random-detect dscp cs2" line with regard to the others.
    Thanks a lot in advance for your help!
    Excuse my English, I'm French!

    Hi,
    The main difference between PQ and LLQ is this.
    PQ: Exclusively starves other traffic while serving packets in its queue. In fact PQ is so greedy that other traffic may never get serviced as long as there are packets in the PQ.
    LLQ: It combines the bandwidth reservation of CBWFQ with PQ.
    LLQ uses policing to ensure that the bandwidth configured for the PQ is not exceeded; when it is exceeded, LLQ drops the packets and then services the non-LLQ queues.
    Hence LLQ guarantees bandwidth only up to the contracted amount.
    E.g. if an LLQ has 30 kbps of bandwidth reserved, it can take a single G.729 call. If a 2nd call comes in, the policer will discard packets and both calls will sound bad. Hence it's important to use CAC to limit the number of calls to what the LLQ bandwidth is configured for, so the policer does not drop packets.
    LLQ traffic uses the EF Diffserv PHB.
    The EF PHB has 2 components.
    1. Queueing to provide low delay, jitter or loss plus a guaranteed bandwidth.
    2. Policing to prevent EF traffic/LLQ traffic from starving other traffic of bandwidth.
    So in summary, PQ will always treat PQ traffic first and will starve other traffic of bandwidth.
    LLQ on the other hand will serve LLQ queues to the tune of bandwidth configured for them and then service other queues too.
    HTH, Please rate useful posts

  • New Mac Pro - some questions about setup and RAM

    After debating between iMac and Mac Pro, I decided on a refurb SINGLE 2.8 quad-core Mac Pro.
    Somewhere in these discussions (can't find it now) I thought I saw recommendations to zero out the hard drive and re-install the system when it arrives. Is this necessary?
    Also, I plan to use the single processor savings to buy RAM. When I go to the Crucial site, they have a choice of RAM for 4-core or 8-core. Mine is only 4-core, but would I choose the 8-core option to get the correct RAM for 2008 models?
    In general, when it comes to buying and installing RAM, should I follow the 8-core directions? Will having a single processor change anything to do with upgrades or expansion?
    Thanks. Hope I didn't ask too much in one post.

    Somewhere in these discussions (can't find it now) I thought I saw recommendations to zero out the hard drive and re-install the system when it arrives. Is this necessary?
    No, that really isn't necessary. If you were prepping a new bare drive that had never been used or you were installing OS X on a drive that had once been used for Windows, then zeroing the drive is quite appropriate.
    All the Mac Pros use the same type of RAM. The Late 2008 models use the faster PC2-6400 800 MHz RAM, but the other specs are the same - ECC, Fully buffered, heat sink. The RAM and installation is the same regardless of cores.
    RAM should be installed minimally in matched pairs, optimally in matched quads. The following illustrations show how it should be installed:
    Mac Pro memory arrangement photos
    Mac Pro Memory Configuration

  • [newbie question] some question about MBean and notification

    Hi,
    Can I manage an existing Java object with an OpenMBean? If yes, how do I connect them? (For a ModelMBean it's setManagedResource, but what about for an OpenMBean?)
    How can I send a notification from a generated ModelMBean? (Should I change my Java object to implement NotificationBroadcasterSupport?)

    Hi,
    An OpenMBean is just an MBean that has a ModelMBeanInfo, and uses only simple types
    as defined by javax.management.openmbean.
    You can use OpenMBeans to manage your resource, but if you don't have MXBeans
    then you will need to write your OpenMBean by hand. This is not very difficult but
    is a bit tedious.
    There should be a chapter on OpenMBeans in the JMX tutorial - have a look at it.
    http://blogs.sun.com/jmxetc/entry/looking_for_jmx_overview_examples
    To send a notification with a RequiredModelMBean just use its sendNotification() method.
    http://java.sun.com/j2se/1.5.0/docs/api/javax/management/modelmbean/ModelMBeanNotificationBroadcaster.html#sendNotification(javax.management.Notification)
    With a "generated ModelMBean" I don't know - I assume it depends on how and by what
    it was generated and whether it is a subclass of javax.management.modelmbean.RequiredModelMBean.
    You will need to look at the generated code to figure it out.
    Hope this helps,
    -- daniel
    JMX, SNMP, Java, etc...
    http://blogs.sun.com/jmxetc

  • Some question about flex and facebook, and saving server work!

    Hi guys, I'm starting to develop a Facebook application and I'm not sure what the better way is to connect my app to Facebook.
    The first option is to pull all the data from Facebook through PHP and then send it to the app; the other is getting the data directly in my app through ActionScript.
    Which is the better option? The first option takes more server work; is that something to consider, or is it irrelevant for small to medium apps?
    Another thing: does PHP integrate better with Facebook than ActionScript? Are the PHP and ActionScript APIs the same?
    What would you say is better?
    Thanks ahead, and sorry if this is not the right forum!

    Hmm, there is a tutorial:
    mediacreative.businesscatalyst.com/AS3_ Creating Facebook Application.pdf

  • LR 4.4 (and 5.0?) catalog: a problem and some questions

    Introductory Remark
    After several years of reluctance this March I changed to LR due to its retouching capabilities. Unfortunately – beyond enjoying some really nice features of LR – I keep struggling with several problems, many of which have been covered in this forum. In this thread I describe a problem with a particular LR 4.4 catalog and put some general questions.
    A few days ago I upgraded to 5.0. Unfortunately it turned out to produce even slower ’speed’ than 4.4 (discussed – among other places – here: http://forums.adobe.com/message/5454410#5454410), so I rather fell back to the latter, instead of testing the behavior of the 5.0 catalog. Anyway, as far as I understand this upgrade does not include significant new catalog functions, so my problem and questions below may be valid for 5.0, too. Nevertheless, the incompatibility of the new and previous catalogs suggests rewriting of the catalog-related parts of the code. I do not know the resulting potential improvements and/or new bugs in 5.0.
    For your information, my PC (running under Windows 7) has a 64-bit Intel Core i7-3770K processor, 16GB RAM, 240 GB SSD, as well as fast and large-capacity HDDs. My monitor has a resolution of 1920x1200.
    1. Problem with the catalog
    To tell you the truth, I do not understand the potential necessity for using the “File / Optimize Catalog” function. In my view LR should keep the catalog optimized without manual intervention.
    Nevertheless, when being faced with the ill-famed slowness of LR, I run this module. In addition, I always switch on the “Catalog Settings / General / Back up catalog” function. The actually set frequency of backing up depends on the circumstances – e.g. the number of RAW (in my case: NEF) files, the size of the catalog file (*.lrcat), and the space available on my SSD. In case of need I delete the oldest backup file to make space for the new one.
    Recently I processed 1500 photos, occupying 21 GB. The "Catalog Settings / Metadata / Automatically write changes into XMP" function was switched on. Unfortunately I had to fiddle with the images quite a lot, so after processing roughly half of them the catalog file reached the size of 24 GB. Until this stage there had been no sign of any failure – catalog optimizations had run smoothly and backups had been created regularly, as scheduled.
    Once, however, towards the end of generating the next backup, LR sent an error message saying that it had not been able to create the backup file, due to lack of enough space on the SSD. I myself found still 40 GB of empty space, so I re-launched the backup process. The result was the same, but this time I saw a mysterious new (journal?) file with a size of 40 GB… When my third attempt also failed, I had to decide what to do.
    Since I needed at least the XMP files with the results of my retouching operations, I simply wanted to save these side-cars into the directory of my original input NEF files on a HDD. Before making this step, I intended to check whether all modifications and adjustments had been stored in the XMP files.
    Unfortunately I was not aware of the realistic size of side-cars, associated with a certain volume of usage of the Spot Removal, Grad Filter, and Adjustment Brush functions. But as the time of the last modification of the XMP files (belonging to the recently retouched pictures) seemed perfect, I believed that all my actions had been saved. Although the "Automatically write changes into XMP" seemed to be working, in order to be on the safe side I selected all photos and ran the “Metadata / Save Metadata to File” function of the Library module. After this I copied the XMP files, deleted the corrupted catalog, created a new catalog, and imported the same NEF files together with the side-cars.
    When checking the photos, I was shocked: Only the first few hundred XMP files retained all my modifications. Roughly 3 weeks of work was completely lost… From that time on I regularly check the XMP files.
    Question 1: Have you collected any similar experience?
    2. The catalog-related part of my workflow
    Unless I miss an important piece of knowledge, LR catalogs store many data that I do not need in the long run. Having the history of recent retouching activities is useful for me only for a short while, so archiving every little step for a long time with a huge amount of accumulated data would be impossible (and useless) on my SSD. In terms of processing what count for me are the resulting XMP files, so in the long run I keep only them and get rid of the catalog.
    Out of the 240 GB of my SSD 110 GB is available for LR. Whenever I have new photos to retouch, I make the following steps:
    create a ‘temporary’ catalog on my SSD
    import the new pictures from my HDD into this temporary catalog
    select all imported pictures in the temporary catalog
    use the “File / Export as Catalog” function in order to copy the original NEF files onto the SSD and make them used by the ‘real’ (not temporary) new catalog
    use the “File / Open Catalog” function to re-launch LR with the new catalog
    switch on the "Automatically write changes into XMP" function of the new catalog
    delete the ‘temporary’ catalog to save space on the SSD
    retouch the pictures (while keeping an eye on the due creation and updating of the XMP files)
    generate the required output (TIF OR JPG) files
    copy the XMP and the output files into the original directory of the input NEF files on the HDD
    copy the whole catalog for interim archiving onto the HDD
    delete the catalog from the SSD
    upon making sure that the XMP files are all fine, delete the archived catalog from the HDD, too
    Question 2: If we put aside the issue of keeping the catalog for purposes other than saving each and every retouching step (which I address below), is there any simpler workflow to produce only the XMP files and save space on the SSD? For example, is it possible to create a new catalog on the SSD, copy the input NEF files into its directory, and re-launch LR 'automatically', in one step?
    Question 3: If this is not the case, is there any third-party application that would ease the execution of the relevant parts of this workflow before and/or after the actual retouching of the pictures?
    Question 4: Is it possible to set general parameters for new catalogs? In my experience most settings of the new catalogs (at least the ones that are important for me) are copied from the recently used catalog, except the use of the "Catalog Settings / Metadata / Automatically write changes into XMP" function. This means that I always have to go there to switch it on… Not even a question is raised by LR whether I want to change anything in comparison with the settings of the recently used catalog…
    3. Catalog functions missing from my workflow
    Unfortunately the above described abandoning of catalogs has at least two serious drawbacks:
    I miss the classification features (rating, keywords, collections, etc.). Anyway, these functions would be really meaningful for me only if they covered all my existing photos, which would require going back to 41k images to classify them. In addition, keeping all the pictures in one catalog would result in an extremely large catalog file, almost surely guaranteeing regular failures. Besides, due to the speed problem, tolerable conditions could be established only by keeping the original NEF files on the SSD, which is out of the question. Generating several 'partial' catalogs could somewhat circumvent this trap, but it would require presorting the photos (e.g. by capture time or subject), and by doing this I would lose the essence of having a single catalog covering all my photos.
    Question 5: Is it the right assumption that storing only some parts (e.g. the classification-related data) of catalog files is impossible? My understanding is that either I keep the whole catalog file (with the outdated historical data of all my ‘ancient’ actions) or abandon it.
    Question 6: If such ‘cherry-picking’ is facilitated after all: Can you suggest any pragmatic description of the potential (competing) ways of categorizing images efficiently, comparing them along the pros and contras?
    I also lose the virtual copies. Anyway, I am confused regarding the actual storage of the retouching-related data of virtual copies. On some websites one can find relatively old posts stating that the XMP file contains all information about modifying/adjusting both the original photo and its virtual copy/copies. However, when fiddling with a virtual copy I cannot see any change in the size of the associated XMP file. In addition, when I copy the original NEF file and its XMP file, rename them, and import these derivative files, only the retouched original image comes up – I cannot see any virtual copy. This suggests that the XMP file does not contain information on the virtual copy/copies…
    For this reason, whenever multiple versions seem reasonable, I create renamed version(s) of the same NEF+XMP files, import them, and make some changes to their settings. I know, this is far from a sophisticated solution…
    Question 7: Where and how the settings of virtual copies are stored?
    Question 8: Is it possible to generate separate XMP files for both the originally retouched image and its virtual copy/copies and to make them recognized by LR when importing them into a new catalog?

    A part of my problems may be caused by selecting LR for a challenging private project, where image retouching activities result in bigger than average volume of adjustment data. Consequently, the catalog file becomes huge and vulnerable.
    While I understand that something has gone wrong for you, causing Lightroom to be slow and unstable, I think you are combining many unrelated ideas into a single concept and winding up with a mistaken idea. Just because your project is challenging does not mean Lightroom is unsuitable. A bigger than average volume of adjustment data will make the catalog larger (I don't know about "huge"), but I doubt bigger by itself will make the catalog "vulnerable".
    The causes of instability and crashes may have NOTHING to do with catalog size. Of course, the cause MAY have everything to do with catalog size. I just don't think you are coming to the right conclusion, as in my experience size of catalog and stability issues are unrelated.
    2. I may be wrong, but in my experience the size of the RAW file may significantly blow up the amount of retouching-related data.
    Your experience is your experience, and my experience is different. I want to state clearly that you can have pretty big RAW files that have different content and not require significant amounts of retouching. It's not the size of the RAW that determines the amount of touchup, it is the content and the eye of the user. Furthermore, item 2 was related to image size, and now you have changed the meaning of number 2 from image size to the amount of retouching required. So, what is your point? Lots of retouching blows up the amount of retouching data that needs to be stored? Yeah, I agree.
    When creating the catalog for the 1500 NEF files (21 GB), the starting size of the catalog file was around 1 GB. This must have included all classification-related information (the meaningful part of which was practically nothing, since I had not used rating, classification, or collections). By the time of the crash half of the files had been processed, so the actual retouching-related data (that should have been converted properly into the XMP files) might be only around 500 MB. Consequently, probably 22.5 GB out of the 24 GB of the catalog file contained historical information
    I don't know exactly what you do to touch up your photos, and I can't imagine how you came up with the estimate that the size should be around 500MB. But again, to you this problem is entirely caused by the size of the catalog, and I don't think it is. Now, having said that, some of your problem with slowness may indeed be related to the amount of touch-up that you are doing. Lightroom is known to slow down if you do lots of spot removal and lots of brushing, and then you may be better off doing this type of touch-up in Photoshop. Again, just to be 100% clear, the problem is not "size of catalog", the problem is that you are doing so many adjustments on a single photo. You could have a catalog that is just as large (i.e. that has lots more photos with few adjustments) and I would expect it to run a lot faster than what you are experiencing.
    So to sum up, you seem to be implying that slowness and catalog instability are the same issue, and I don't buy it. You seem to be implying that slowness and instability are both caused by the size of the catalog, and I don't buy that either.
    Re-reading your original post, you are putting the backups on the SSD, the same disk as the working catalog? This is a very poor practice, you need to put your backups on a different physical disk. That alone might help your space issues on the SSD.

  • Some questions about the integration between BIEE and EBS

    Hi,
    I'm a newbie to BIEE. These days I am having a look at the BIEE architecture and the BIEE components. In the next project there is some work on BIEE development based on the EBS application. I have some questions about the integration:
    1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    2) In the BIEE Administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    3) If the physical table creation is needed, how is the data transferred from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer, Warehouse Builder or Oracle Data Integrator?
    4) During the data transfer phase, if a very large volume of data needs to be transferred, how is completeness maintained? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, can they see the new 50% of the data in the reports? Is there some transaction control in the ETL phase?
    Could anyone give me some guidance? I would also appreciate any other information you can give.
    Thanks in advance.

    > 1) Generally, are the BIEE database and application server separate from the EBS database and application? Can both the BIEE 10g and 11g versions be integrated with EBS R12?
    You should consider OBI Applications here, which uses OBIEE as the reporting tool with different pre-built modules. Both 10g and 11g come with different versions of BI Apps, which support sources like Siebel CRM, EBS, PeopleSoft, JD Edwards etc.
    > 2) In the BIEE Administration tool, the first step is to create physical tables. If the source application is EBS, is it still necessary to create the physical tables?
    It is independent of any source. This is OBIEE modeling, creating the RPD with all the layers. If you build it from scratch you will need to create all the layers; if BI Apps is used, you get a pre-built RPD along with the other pre-built components.
    > 3) If the physical table creation is needed, how is the data transferred from the EBS source tables to the BIEE physical tables? Which ETL tool do most developers prefer, Warehouse Builder or Oracle Data Integrator?
    BI Apps comes with pre-built ETL mappings to use with these tools, mainly with Informatica. Only BI Apps 7.9.5.2 comes with ODI, but Oracle plans to have only ODI for further releases.
    > 4) During the data transfer phase, if a very large volume of data needs to be transferred, how is completeness maintained? For example, if 1 million rows need to be transferred from the source database to the BIEE physical tables and users open a BIEE report when 50% is complete, can they see the new 50% of the data in the reports? Is there some transaction control in the ETL phase?
    Users will still see the old data, because it is good practice to turn on the cache and purge it after every load.
    Refer..http://www.oracle.com/us/solutions/ent-performance-bi/bi-applications-066544.html
    and many more docs on google
    Hope this helps

  • Question about MII and xMII UDS versions

    Hi all,
    just some quick questions on MII and MII UDS:
    Has the final version 12.1 of MII already been released? I can't find it in my service marketplace (does it depend on my account configuration?)
    What is the last MII UDS version? The 4.0 SP2? Is xMII UDS supported on Windows 2003 server 64 bit or only on 32bit?
    Can I connect MII via UDC connector to an UDS different from xMII UDS?
    Your help is appreciated...
    Mauro

    >
    > * Has the final version 12.1 of MII already been released? I can't find it in my service marketplace (does it depend on my account configuration?)
    Yes, it would depend upon your SMP credentials and licensed software purchases (if you have 12.0 then you're eligible for 12.1 too).  But it is not GA yet, so only people in ramp-up currently have access to it.  Don't bother asking for a date because it has not been set.  Some time this quarter, hopefully.
    >
    > * What is the last MII UDS version? The 4.0 SP2? Is xMII UDS supported on Windows 2003 server 64 bit or only on 32bit?
    The 2.5 UDS's are still viable and necessary for some of the vendor API based connections, but 4.0 is available for OleDB, OPC-DA and OPC-HDA.  They have only been validated on 32 bit, but in theory they should work in a 32 bit emulation mode on a Windows 64 bit environment.  Supported vs. unsupported is a thin line as a result, but perhaps others using them could share any specific experiences using them on 64 bit.
    >
    > * Can I connect MII via UDC connector to an UDS different from xMII UDS?
    Your MII version of 11.5, 12.0, or 12.1 will work the same way with the UDS's as they are not version specific to MII.
    Regards,
    Jeremy
