Merge data creating duplicate listings

I have a CSV database with about 580 records. I've created a template with the correct fields in InDesign, loaded the CSV file using Data Merge, then created the merged document. I get 580 listings of the same single item, like the image below. If there is an errant symbol in the CSV causing this, it doesn't show up as an error in the preview. In fact, all 580 records can be previewed, copied and pasted into another document, and displayed error-free. Any suggestions on how to fix this would be appreciated.

Depends on the document. ISSUU - 2015 Kayak Gear Guide by Wild Coast Publishing. This went live unpublished for client proofing, requiring two separate sessions, so three reflows of the database. If it worked properly, the clients would submit revised spreadsheets that are dropped into a database. The revised database is then loaded into InDesign. If the update worked as it should, it wouldn't require a single extra piece of work in the InDesign document. It would be: import the client spreadsheet into the database, export the database to a CSV file, load the updated CSV into InDesign and update the fields. Five minutes. Now look at that document: you have 60 of 604 records to change and it can't be automated. You're telling me it's no big deal to reflow that document and build it from scratch again? I don't think so. The problem is this would be SO easy if it worked properly. Some initial finessing of the headings and bam, done for the year. We could take client updates and reflow it live online in minutes of effort. It should work properly. I paid Adobe to have it work properly. It doesn't work properly. I lost days of productivity trying to overcome the shortfalls, and apparently I'm hearing I can't resolve the issue without spending hundreds, potentially thousands, of dollars on another system. If that were the case I wouldn't have upgraded to CC. CS4 worked fine and it was paid for. (As an aside, there was an additional error: the links didn't work, as they pointed to the spreadsheet rather than the URL. Plus the issuu.com publisher turned all links live, including the database links that pointed back to the database, so there were suddenly 10 or so links per record, none of which worked. This version of the document was saved as a PDF without hyperlinks, as we decided that no links temporarily was better than 10 that didn't work. So that part remains to be done, another problem to overcome, as the autodetect-hyperlinks feature seems to have a flaw as well when dealing with merged data.)
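For anyone debugging the same symptom, a quick first step is to rule out problems in the data source itself before blaming InDesign. Below is a minimal Python sketch (not from the thread) that scans a merge CSV for control characters, ragged rows, and duplicate records; the filename listings.csv is hypothetical.

import csv

# Scan a Data Merge CSV for common troublemakers: control characters,
# rows with the wrong field count, and exact duplicate records.
with open("listings.csv", newline="", encoding="utf-8-sig") as f:
    rows = list(csv.reader(f))

header, records = rows[0], rows[1:]
seen = {}
for i, row in enumerate(records, start=2):  # line 2 = first data record
    if len(row) != len(header):
        print(f"line {i}: expected {len(header)} fields, got {len(row)}")
    for field in row:
        bad = [ch for ch in field if ord(ch) < 32]
        if bad:
            print(f"line {i}: control character(s) {bad!r} in {field!r}")
    key = tuple(row)
    if key in seen:
        print(f"line {i}: exact duplicate of line {seen[key]}")
    else:
        seen[key] = i

If the file comes back clean, the fault is more likely in how the placeholders are arranged on the template page (everything on the page is repeated once per record) than in the data.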

Similar Messages

  • How do I get Data Merge to work without creating duplicate pages in a single-page native file?

    I've called Adobe about this and they indicated it was some kind of setting that needed to be changed in InDesign, but wouldn't tell me how to do it unless I paid for an "incident pack" and talked with technical support.
    Problem: I have a single page document in InDesign CS5. It's a direct-mail postcard with large high-res images and complicated graphics on it. I want to merge a list of recipients names/addresses from Excel with this single-page design. Excel document has 2,000 names in it.
    I successfully create the merge using InDesign's Data Merge feature, but it creates a new InDesign document with 2,000 pages in it. Each page is a duplicate of my original file, but with a different address on each page. Because of the multiple records and the high-res images it has duplicated 2,000 times, InDesign crashes. It also took about 4 hours to do the merge.
    There has got to be a better way to merge data without it creating the same page over and over again.
    I can't make a PDF to solve this problem because I have to work with an in-house printer and they do a poor job of printing from PDF. So, I always send them native files. This printer does not have variable data software. However, if that is the fast and easy solution to this problem, I will force them to purchase it.
    Any advice or solution to this problem would be appreciated. Is there a setting in InDesign that needs to change so it doesn't repeat the single-page design 2,000 times and cause InDesign to crash?
    Thank you.

    That's exactly how Data Merge works. If it's on the page, it gets duplicated. 2000 records is a lot, so I'm not sure if the images or the sheer number is what's slowing things down.
    One thing you can try is to move everything EXCEPT the merge placeholders to a master page, then assign the [None] master to the page with the placeholders and do the merge. Change the master to the one with your static content after the merge. That might speed up the merge, but it won't do anything for the file being 2,000 pages that each have to RIP for printing. The best way to deal with this is to find a printer who can do VDP printing and give him the template file and your data file, and let him do the merge in the print stream.
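    If memory is the choke point, another workaround (a sketch, not something from this thread) is to split the data source into batches and run several smaller merges instead of one 2,000-page document. The filenames and batch size below are hypothetical.

    import csv

    BATCH = 250  # records per merged document; tune to taste

    with open("recipients.csv", newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        header = next(reader)
        rows = list(reader)

    # Writes recipients_001.csv, recipients_002.csv, ... each with the header;
    # run Data Merge once per chunk instead of one giant merge.
    for n, start in enumerate(range(0, len(rows), BATCH), start=1):
        out_name = f"recipients_{n:03}.csv"
        with open(out_name, "w", newline="", encoding="utf-8") as out:
            writer = csv.writer(out)
            writer.writerow(header)
            writer.writerows(rows[start:start + BATCH])

    Each resulting document stays small enough to open, proof, and hand off without crashing InDesign, at the cost of a few extra export steps.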

  • Data Loader: import creating duplicate records?

    Hi all,
    Has anyone else encountered this behaviour with Oracle Data Loader, where duplicate records are created (even with the option duplicatecheckoption=externalid set)? When I check the import request queue view, the request parameters of the job look fine:
    Duplicate Checking Method == External Unique ID
    Action Taken if Duplicate Found == Overwrite Existing Records
    but Data Loader has created new records where the "External Unique ID" already exists.
    What's very strange is that when I run the same import manually (using the Import Wizard) it works correctly! There the duplicate checking method works and the record is updated.
    I know the Data Loader has two methods, one for update and the other for import; however, I wouldn't expect the import to create duplicates if the record already exists, rather than doing nothing!
    Is anyone else experiencing the same? I hope this is not expected behaviour! By the way, the "Update" method works fine.
    thanks in advance, Juergen

    Sorry to hear about your duplicate records, Juergen. Hopefully you performed a small test load first, before a full load, which is a best practice for data import that we recommend in our documentation and courses.
    Sorry also to inform you that this is expected behavior --- Data Loader does not check for duplicates when inserting (aka importing). It only checks for duplicates when updating (aka overwriting). This is extensively documented in the Data Loader User Guide, the Data Loader FAQ, and in the Data Import Options Overview document.
    You should review all documentation on Oracle Data Loader On Demand before using it.
    These resources (and a recommended learning path for Data Loader) can all be found on the Data Import Resources page of the Training and Support Center. At the top right of the CRM On Demand application, click Training and Support, and search for "*data import resources*". This should bring you to the page.
    Pete
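    Given that insert-mode behaviour, a practical safeguard (a sketch, not from the docs) is to split the import file before handing it to Data Loader: rows whose external ID already exists go to an update run, the rest to an insert run. It assumes the existing IDs can be exported to existing_ids.csv and that the import file has an ExternalUniqueID column; both names are hypothetical.

    import csv

    # External IDs already present in CRM On Demand, one per line.
    with open("existing_ids.csv", newline="", encoding="utf-8") as f:
        existing = {row[0].strip() for row in csv.reader(f) if row}

    # Route each row: already-known IDs -> update file, new IDs -> insert file.
    with open("import.csv", newline="", encoding="utf-8") as src, \
         open("import_new.csv", "w", newline="", encoding="utf-8") as new_f, \
         open("import_update.csv", "w", newline="", encoding="utf-8") as upd_f:
        reader = csv.DictReader(src)
        new_w = csv.DictWriter(new_f, fieldnames=reader.fieldnames)
        upd_w = csv.DictWriter(upd_f, fieldnames=reader.fieldnames)
        new_w.writeheader()
        upd_w.writeheader()
        for row in reader:
            target = upd_w if row["ExternalUniqueID"] in existing else new_w
            target.writerow(row)

    Feeding import_new.csv to the insert method and import_update.csv to the update method then matches the tool's documented behaviour instead of fighting it.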

  • Merging data without set text framing - creating continuous, flowing/wrapping records

    Hi.
    Has anyone figured out a way to merge data without confining each record to a set text frame size, hence creating a continuous flow or wrapping of records?
    I need to put in an image, followed by a text description, so I can't merge all the text into one text box.
    I can fit frame to text, then align with a set gap, but this is way too time-consuming and ineffective when creating a doc of several hundred pages.

    OysterBoy84 wrote:
    Hi.
    Has anyone figured out a way to merge data without confining each record to a set text frame size, hence creating a continuous flow or wrapping of records?
    I need to put in an image, followed by a text description, so I can't merge all the text into one text box.
    I can fit frame to text, then align with a set gap, but this is way too time-consuming and ineffective when creating a doc of several hundred pages.
    Searching Google for "indesign script combine all stories" without quotes turns up a number of links. There's a link to InDesign Secrets that describes a free script that may solve your problem.
    HTH
    Regards,
    Peter
    Peter Gold
    KnowHow ProServices
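    A different route, sketched here rather than taken from the thread: pre-assemble the records into one continuous text file outside InDesign and place it as a single auto-flowing story, so no per-record frames are needed. The field and file names are hypothetical.

    import csv

    # Write each record as an image-path paragraph plus a description
    # paragraph, all in one text file that can flow as a single story.
    with open("records.csv", newline="", encoding="utf-8") as f, \
         open("flow.txt", "w", encoding="utf-8") as out:
        for row in csv.DictReader(f):
            out.write(f"{row['image']}\n")
            out.write(f"{row['description']}\n\n")

    The image-path paragraphs then serve as markers for anchoring the actual images (by hand or by script); the point is that a single story wraps and flows across pages on its own.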

  • For our family finances, how do I create a monthly report merging data (by category) from tables for checking, credit card, and savings?

    For our family finances, how do I create a monthly expenditure report that automatically merges data (by category) from checking, credit card, and savings accounts?

    You can set up four tables:
    1) a summary table
    2) a table for pasting Checking transactions
    3) a table for pasting Credit Card transactions
    4) a table for pasting Savings transactions
    You will have to categorize each transaction, or use the categories that come from the bank/credit card.
    I used pop-up menus to make the categories (Type) and summed the two accounts in the summary table.
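    Outside Numbers, the same merge-by-category logic is just a grouped sum across the three transaction lists. A minimal sketch, assuming each exported CSV has Category and Amount columns with plain numeric amounts (all hypothetical names):

    import csv
    from collections import defaultdict

    # Sum every transaction from all three accounts into one total per category.
    totals = defaultdict(float)
    for name in ("checking.csv", "creditcard.csv", "savings.csv"):
        with open(name, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                totals[row["Category"]] += float(row["Amount"])

    # One line per category: the monthly roll-up the summary table shows.
    for category, amount in sorted(totals.items()):
        print(f"{category}: {amount:.2f}")

    In Numbers itself the equivalent is a SUMIF per category against each pasted table, added together in the summary table.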

  • Creating Merge Data

    Has anyone found a way to create merge data to be used in wp application by querying a table?

  • Merge data to create multiple personalized videos

    In After Effects, is it possible to create layers that allow for merged data to be imported so as to export multiple personalized videos?

    I envision one 'master' video with text layers that call for data inputs. Similar to a mail merge process, a unique 'personalized' instance of the video would be created from each name in the data list. If I have 50 names I'd render 50 new videos, each one unique.
    Is this possible?
    Thank you!!
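    For what it's worth, newer versions of After Effects can treat a JSON file as footage and drive text layers from it with expressions, so one possible route is to fan the data list out into one JSON per person and render the template once per generated file. A sketch; names.csv and its name column are hypothetical.

    import csv
    import json

    # One JSON per record; an AE template reading its text from imported
    # JSON footage can then be rendered once per generated file.
    with open("names.csv", newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.DictReader(f), start=1):
            with open(f"person_{i:03}.json", "w", encoding="utf-8") as out:
                json.dump({"name": row["name"]}, out)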

  • Synchronize creates duplicates in database

    I import photos from the memory card to a folder. Regardless of whether I import from a device or from disk, convert to DNG or just copy the files, I get duplicates if I try to synchronize the folder. If I remove the duplicates created by the last import and synchronize the folder, the files are duplicated again. If I remove the first imported photos and then synchronize, no duplicates are created. "Don't import suspected duplicates" is checked. This makes the synchronize command unusable for me. Is there a way around this?
    Lennart
    LR 1.3.1
    Windows XP Pro
    Pentax istDS

    Yes, all files are duplicated. As far as I have checked, this occurs consistently.
    Yes, as I already wrote, "Don't import suspected duplicates" is checked.
    The reason I synchronize is exactly according to the help text:
    "When you synchronize folders, you have the option of adding files that have been added to the folder but not imported into the catalog, removing files that have been deleted, and scanning for metadata updates. The photo files in the folder and all subfolders can be synchronized. You can determine which folders, subfolders, and files are imported."
    Lennart
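    One way to narrow this down (a sketch, not from the thread) is to verify that the duplication exists only in the catalog and not on disk: hash every file in the synchronized folder and look for identical digests. The folder name photos is hypothetical, and read_bytes loads whole files, which is fine for a spot check.

    import hashlib
    import pathlib
    from collections import defaultdict

    # Group files by content hash; more than one path per digest means a
    # true duplicate file on disk, otherwise the duplication is catalog-only.
    by_digest = defaultdict(list)
    for path in pathlib.Path("photos").rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            by_digest[digest].append(path)

    for digest, paths in by_digest.items():
        if len(paths) > 1:
            print("duplicate content:", *paths)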

  • Merging data from 2 schemas with same object structure into one schema

    Hi
    I want to merge data from 2 schemas in different environments (say Test1 and Test2) into 1 schema (say Test_final) for testing. Both schemas have the same structure; the data can be the same or different.
    What I did was take an export of the schema on Test1 and then import it into Test_final. Now I need to merge/append the data from Test2 into Test_final.
    I cannot merge the data with import due to primary key constraints (import doesn't support this feature), so I tried SQL*Loader to append the data, using a sequence to generate primary keys.
    But my worry is that since new primary keys are generated, the foreign keys will become invalid and the data will not be consistent.
    Is there any other way to do this task?
    Regards
    Raman

    This approach might be better...
    create table test_final
    as
    select *
    from schema1.test1;
    insert into test_final
    select t2.*
    from schema2.test1 t2
    where not exists
      (select 1 from test_final tf where tf.pk = t2.pk);
    ...assuming duplicate primary keys mean duplicate records. If that assumption is not the case then you have a more complex data migration exercise on your hands and you need to figure out some rules to determine which version of the data takes precedence.
    Cheers, APC

  • Can we merge data from multiple sources in Hyperion Interactive Reporting ?

    Hi Experts,
    Can we merge data from multiple sources in Hyperion Interactive Reporting? For example, can we have a report based on DB2, Oracle and Informix, and multidimensional databases like DB2, MSOLAP and ESSBASE?
    Thanks,
    V

    Yes. Each source has its own query, with some common dimension so the Results sections can be joined together in a final query.
    Look in the help under "Creating Local Joins".
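    To picture what such a local join does, here is a small illustration (plain Python, not IR itself) joining result sets from two hypothetical sources on a shared dimension:

    # Result sets pulled by two separate queries from different sources,
    # joined locally on the shared "region" dimension.
    relational = [
        {"region": "East", "sales": 120.0},
        {"region": "West", "sales": 95.0},
    ]
    multidim = [
        {"region": "East", "forecast": 140.0},
        {"region": "West", "forecast": 90.0},
    ]

    forecast_by_region = {r["region"]: r["forecast"] for r in multidim}
    for row in relational:
        # The final query sees both measures side by side per region.
        print(row["region"], row["sales"], forecast_by_region.get(row["region"]))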

  • iTunes consolidation: will it create duplicates of media files if they already exist in the new iTunes destination folder but aren't being referenced by the iTunes library?

    I have a MacBook Pro from fall 2010. I have been using my external HD as my primary iTunes folder for the past 7 yrs (i.e. it is organized by iTunes' default categorization, etc.). However, there is some overlap between my external HD iTunes folder and my computer's original iTunes folder (probably 20-30% of my current library's media files were already duplicated there from when I first bought the HD and transferred files to the external; however, they are mostly referenced from the external now). Also there are files in the computer's iTunes folder that are not on my external (probably due to organizational mishaps and podcast downloads when not connected to the external, etc.). I do not know how many files this is the case for.
    Recently, I decided that I wanted to consolidate all of my iTunes music into one concise folder on my MacBook's HD to avoid these issues going forward. I also just bought a new external which I will use to back up my computer's iTunes folder once I consolidate everything to the computer. But I plan on using the iTunes folder on the computer going forward, or at least using the consolidate function to avoid these issues.
    Here is my dilemma. Prior to learning of the consolidate feature today, I changed the default iTunes folder in settings to the original iTunes folder on the MacBook and then manually dragged all of the media files from my external iTunes folder to the MacBook's iTunes folder. There were a ton of duplicates or overlapping folders with similar artist names, so I opted to merge all folders where possible. After doing this, I realized the iTunes library still references the files that I dragged and dropped from the external, since that is where my default iTunes folder used to be. At that point I read about and learned of the consolidate iTunes feature, which seems like the best way to do this. But I am afraid that if I consolidate my library to my MacBook's iTunes folder now, it will create duplicates of the 4,000+ media files I just dragged to the MacBook's iTunes folder, which are already in the iTunes library but being referenced from the old external.
    Question: Is there a way that I can make these files I just dragged to the MacBook's iTunes folder become the ones referenced by the iTunes library through the consolidate feature, without creating duplicates of these 4,000+ files? I want to avoid having three copies of these files (two duplicates in iTunes and the originals on the external). Furthermore, what is the best way to find out which files in the MacBook's iTunes folder are not included in the library after consolidating everything to the computer's HD? Then after that, what is the best way to start backing this all up to my new external, so that this overlapping issue doesn't happen again? I'm assuming I just need to make sure to consolidate first when changing iTunes folders going forward. Thank you. I know that was a lot to read but hopefully someone can help with this issue.

    In general:
    - The best way to check if files are in a library is to drag the whole media folder to the library. If something was missed it will be added. If it is in the library it won't be added a second time.
    - If you have a second copy of a file and iTunes does not currently list that specific file in its database, then even if it is identical to one already in the database it will add it a second time. So if you have file xyz on the internal drive and the external drive and add them to iTunes, you will end up with two entries for the same thing in iTunes, because you really do have two copies of the file. While this can happen with two copies on a single drive, it will almost definitely be the case when you have a copy of a file on two drives, because iTunes really sees them as two different things.
    - Unless you know how iTunes works, avoid moving files yourself. Let the consolidate feature do all the file moving. Manually moving files can really mess up iTunes unless you know what you are doing and have iTunes set to not try to do it itself.
    - There's no clean and easy way to delete duplicates. The best thing is to change practice so you don't create them. Here are references:
    How to find and remove duplicate items in your iTunes library - http://support.apple.com/kb/HT2905
    http://dougscripts.com/itunes/itinfo/dupin.php (commercial)
    Posts by turingtest2 about different types of duplicates and techniques- https://discussions.apple.com/thread/3555601 and https://discussions.apple.com/message/16042406 (Note: The DeDuper script is for Windows)
    http://www.hardcoded.net/dupeguru_me/
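    To answer the "which files aren't in the library" part programmatically, here is a sketch. It assumes iTunes' File > Library > Export Library... was used to produce Library.xml, and that the media folder path below is adjusted to yours.

    import pathlib
    import plistlib
    import urllib.parse

    # Files referenced by the iTunes database, from an exported Library.xml.
    with open("Library.xml", "rb") as f:
        lib = plistlib.load(f)

    referenced = set()
    for track in lib["Tracks"].values():
        loc = track.get("Location", "")
        if loc.startswith("file://"):
            path_str = urllib.parse.unquote(
                loc.removeprefix("file://").removeprefix("localhost"))
            referenced.add(pathlib.Path(path_str).resolve())

    # Anything on disk under the media folder that the library never mentions.
    media_root = pathlib.Path("~/Music/iTunes/iTunes Media").expanduser()
    for path in media_root.rglob("*"):
        if path.is_file() and path.resolve() not in referenced:
            print("not in library:", path)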

  • Error creating a duplicate database on the same server

    Hi,
    I am getting the following error when creating a duplicate database:
    DB Version=10.2.0.4
    $ rman target sys/sys@test nocatalog auxiliary /
    Recovery Manager: Release 10.2.0.4.0 - Production on Mon Sep 28 15:13:32 2009
    Copyright (c) 1982, 2007, Oracle. All rights reserved.
    connected to target database: TEST (DBID=1702666620, not open)
    using target database control file instead of recovery catalog
    connected to auxiliary database: CLNTEST (not mounted)
    RMAN> DUPLICATE TARGET DATABASE TO "CLNTEST";
    Starting Duplicate Db at 28-SEP-09
    allocated channel: ORA_AUX_DISK_1
    channel ORA_AUX_DISK_1: sid=156 devtype=DISK
    contents of Memory Script:
    set until scn 597629461;
    set newname for datafile 1 to
    "/u01/data_new/clonetest/system01.dbf";
    set newname for datafile 2 to
    "/u01/data_new/clonetest/undotbs01.dbf";
    set newname for datafile 3 to
    "/u01/data_new/clonetest/sysaux01.dbf";
    set newname for datafile 4 to
    "/u01/data_new/clonetest/users01.dbf";
    set newname for datafile 5 to
    "/u01/data_new/clonetest/example01.dbf";
    set newname for datafile 6 to
    "/u01/data_new/clonetest/undotbs02.dbf";
    set newname for datafile 7 to
    "/u01/data_new/clonetest/alweb1.dbf";
    set newname for datafile 8 to
    "/u01/data_new/clonetest/indx1.dbf";
    restore
    check readonly
    clone database
    executing Memory Script
    executing command: SET until clause
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    executing command: SET NEWNAME
    Starting restore at 28-SEP-09
    using channel ORA_AUX_DISK_1
    RMAN-00571: ===========================================================
    RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
    RMAN-00571: ===========================================================
    RMAN-03002: failure of Duplicate Db command at 09/28/2009 15:13:40
    RMAN-03015: error occurred in stored script Memory Script
    RMAN-06026: some targets not found - aborting restore
    RMAN-06023: no backup or copy of datafile 8 found to restore
    RMAN-06023: no backup or copy of datafile 7 found to restore
    RMAN-06023: no backup or copy of datafile 6 found to restore
    RMAN-06023: no backup or copy of datafile 5 found to restore
    RMAN-06023: no backup or copy of datafile 4 found to restore
    RMAN-06023: no backup or copy of datafile 3 found to restore
    RMAN-06023: no backup or copy of datafile 2 found to restore
    RMAN-06023: no backup or copy of datafile 1 found to restore
    RMAN> crosscheck backupset of datafile 1;
    using target database control file instead of recovery catalog
    allocated channel: ORA_DISK_1
    channel ORA_DISK_1: sid=154 devtype=DISK
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 2;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 3;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 4;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 5;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 6;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    RMAN> crosscheck backupset of datafile 7;
    using channel ORA_DISK_1
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u01/rmanbkp/db_prd0akqcnoh_1_1 recid=10 stamp=698769169
    crosschecked backup piece: found to be 'AVAILABLE'
    backup piece handle=/u02/data/flash_recovery_area/BINGO/backupset/2009_09_28/o1_mf_nnndf_TAG20090928T151044_5d114x0h_.bkp recid=17 stamp=698771444
    Crosschecked 2 objects
    Thanks,

    Hi,
    RMAN> show all;
    using target database control file instead of recovery catalog
    RMAN configuration parameters are:
    CONFIGURE RETENTION POLICY TO REDUNDANCY 1; # default
    CONFIGURE BACKUP OPTIMIZATION OFF; # default
    CONFIGURE DEFAULT DEVICE TYPE TO DISK; # default
    CONFIGURE CONTROLFILE AUTOBACKUP OFF; # default
    CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '%F'; # default
    CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO BACKUPSET; # default
    CONFIGURE DATAFILE BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE ARCHIVELOG BACKUP COPIES FOR DEVICE TYPE DISK TO 1; # default
    CONFIGURE MAXSETSIZE TO UNLIMITED; # default
    CONFIGURE ENCRYPTION FOR DATABASE OFF; # default
    CONFIGURE ENCRYPTION ALGORITHM 'AES128'; # default
    CONFIGURE ARCHIVELOG DELETION POLICY TO NONE; # default
    CONFIGURE SNAPSHOT CONTROLFILE NAME TO '/oracle/ora10g/product/10.2.0/db_1/dbs/snapcf_TEST.f'; # default
    Thanks,

  • What does the Merge Data Stream processor do when there are multiple input streams to it from the same reader?

    Hi,
    I have a process with a reader of master data that outputs 5 records that feeds simultaneously into 3 different lookup and return processors.
    Each lookup and return processor brings some data back from a detail table. There can be multiple details so I follow each lookup processor with a split records from array processor. Hence I end up with 3 'streams' of data. Stream 1 has 8 records, stream 2 has 5 records and stream 3 has 6 records.
    I join all these streams to a Merge Data Streams processor.
    I end up with 9 records so although the help for the Merge Data Streams processor says 'Merge Data Streams does not perform any transformation, matching, or merging of records' there is clearly some merging going on.
    What is the behaviour of the merge data streams processor in this scenario?
    I have added attributes and flags into each of the streams. How many records should I see and what values should the added attributes/flags have (some records show attributes/flags from all 3 streams whereas others show just those attributes/flags from one stream).
    I have developed a test case simply to understand what the processor is doing but it's not obvious and furthermore it's probably unwise to develop EDQ processes where the processor behaviour is not documented and guaranteed to remain consistent. What I am trying to achieve is to bring all of a person's (the master data) various details (assignments, employers, etc.) together so we can check the data (some rules require data from multiple details).
    Thanks, Nik

    Cheers Mike - and for the explanation of the terms.
    I think I understand now how it's supposed to work.
    What I'm finding, however, is that when I set a flag to Y at the beginning of a path (one that includes a lookup and return and then a split records from array processor), that flag shows no value (i.e. an empty value) in SOME of the records shown in the subsequent MDS processor (it's fine in the very last split processor before we get to the MDS, but then again there are fewer records in that split processor than in the MDS).
    In my case there are obviously more records in the MDS processor than there were in the original reader (because the lookup and returns are configured to have unlimited maximum matches). As mentioned, the different paths return different numbers of records before being combined in the MDS. Say a reader has 5 records and path 1 returns 8 records in total, including a path-specific flag (flag1, set to Y), but path 2 (which again adds its own path-specific flag, flag2, set to Y) returns just 5 records (since nothing was added from the lookups). Are you saying that flag2 would show as 'Y' for all 8 records shown in the MDS?
    Hopefully you would be able to see what I mean if you try to create a process like the one I've described (or I can upload a package).
    Re the purpose of the separate-paths approach: it is simply to allow the visualisation ('showing the working', as Neil puts it) of the different checks being carried out by the process.
    This is considered one of the benefits of the tool over writing SQL queries (with outer joins, query criteria, etc.).
    Also, as mentioned, I was following an example that Neil put together for us to ensure that we are doing things in a 'proper' and supported way.
    If we put all the lookups, etc. for all the checks into one data stream then it is no longer so understandable, and the value of joining processors in a process over simply writing SQL becomes questionable; arguably the EDQ process in fact becomes less easy to understand than simply writing SQL.
    Also, to go down this route I will need to revise the processes that I have already developed (which were substantially working until I revised them).
    Thanks, Nik

  • Is there a way to replace tags on import rather than create duplicates?

    I use the Tag Engine to connect with the KEPWARE OPC server program, and at times I want to change how the tags are located inside KEPWARE but keep the same tag names inside LabVIEW. I export the .scf file and modify it inside Excel. When I import it again it creates duplicate tag names. When you have 4,000-8,000 tags, it can take a while to delete the duplicates. It would be great if we could replace the existing tag name data. Maybe I should be creating a VI to modify the properties of individual tags instead. How does anyone else work around this issue?

    I would export all of the tags and then modify the ones that need to be modified. Then, before I import them, I would create a new scf file. This way there are no duplicates.
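    If recreating the .scf is impractical, the replace-on-import behaviour can be approximated before importing: treat the exported, Excel-edited file as delimited text and keep only the last row per tag name, so modified rows win. A sketch; it assumes the export is comma-delimited with the tag name in the first column, which may not match your .scf export exactly.

    import csv

    # Keep the last occurrence of each tag name so edited rows replace
    # originals; import the deduplicated file instead of the raw export.
    with open("tags_export.csv", newline="", encoding="utf-8") as f:
        rows = list(csv.reader(f))

    header, records = rows[0], rows[1:]
    by_name = {}
    for row in records:  # later rows overwrite earlier ones
        by_name[row[0]] = row

    with open("tags_deduped.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(by_name.values())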

  • "Merging data" error when syncing iPhone 5s Calendar to MacBook Pro Calendar, using Mountain Lion

    Since a failed attempt to sync data via iCloud (resulting in duplicate entries in Calendar and Contacts, missing data, and other oddities), whenever I try to sync the Calendars via iTunes, I repeatedly get the error message, "iTunes could not sync calendars to the iPhone <name> because an error occurred while merging data." The Contacts sync fine, and the backup runs smoothly. The only discussions I can find in Apple Support are dated 2009 with older systems (I'm using Mountain Lion 10.8.5, iTunes 11.1.5, iOS 7.1.1, but the problem occurred with iOS 7.0 and possibly 6.0 as well). Repairing permissions had no effect. I can't use the option to copy the Mac Calendar to the iPhone because new data are entered into the iPhone. Would appreciate guidance. I'm trying to avoid iCloud but if I have to go there to fix this, guidance for that would also be welcome.

    I have iPhone 5 with iOS 6.1.4 and iMac with Mavericks 10.9.3 and iTunes 11.2.1.
    I also got this error when I tried to sync the iPhone with the iMac (after a year in which Apple disabled local USB cable sync in Mavericks - thanks Apple for re-enabling it!).
    I tried a number of things to no avail. Finally I tried three things - and hurrah, I can sync now! Unfortunately I didn't test the sync between these three steps, so I don't know which one fixed it; up to you to try. My guess is that it was the 3rd step, actually.
    Step 1: On my iPhone I opened Calendar and tap "Calendars" (top left). I have 7 items on the list (2 email accounts, Calendar, Home, Work, Reminders, Birthdays). Most of the calendar entries were in my Home calendar, only a few were in Work or Birthdays. I moved all of them to Home. I also checked whether there aren't any items without content.
    Step 2: On the list of Calendars I had one custom item (Birthday in my local language). This item had only one entry in it. I deleted both the single entry and the custom item as a whole - so I left the default categories only.
    Step 3: At the top of the list of Calendars there is an item "All from My Mac". I tapped this and thus enabled all of my calendar categories. Now the sync worked like a charm!
    I actually noticed that when I synced, iTunes automatically checked all items under "Sync Calendars - Selected Calendars". Even if I manually deselected all except the "Home" calendar, as soon as I started to sync, iTunes checked all items automatically. This led me to thinking that if I check "All from My Mac" item in my iPhone it might work.
    Good luck with your sync issue, hope you'll solve it.
