Large campaigns - Most efficient way to handle - Not using EMOD

We're using CRMOD but not EMOD for our marketing campaigns, and our CRM is B2C rather than B2B.
We send direct mail campaigns to 200,000 of our contacts and eNewsletters to about 800,000 contacts (using Interspire). Until now, none of this has gone through CRM.
I would like to know the best approach to:
1) Pull these records out of CRM (identify the contacts according to our marketing targets). The limit in Analytics only allows me to extract that information if I enter criteria that segment my contacts into smaller chunks, e.g. by State / Province. The Segmentation Wizard can also be used for chunks of fewer than 50,000 records. Are any other methods available?
As a test I used the Wizard on a segment with a few records and updated CRM with Campaign Recipient records set to status 'Sent'.
2) Sometimes the company we hire to run our direct mail campaign verifies our addresses against change-of-address records and sends our file back with updated addresses. How can we import just the address changes for, say, 35,000 records? Importing addresses does not seem to be available under Admin / Data Import.
3) Import responses into CRM. I did some testing by entering campaign recipients and setting the Status to 'Message Opened' and the Delivery Status to 'Opened' (both via a Wizard segment update and via the user interface), but when I try to pull the numbers in Analytics (Campaign Response History) I see only the following metric being populated:
- # Recipients
but not:
- # of responders
- # of Responses
- # of open responses
- Avg days to respond
My questions here are:
- How on earth do I prep my data so that these fields populate? Are they strictly reserved for EMOD?
- What's the best approach to load 200,000 responses back into CRM?
- Is such a large volume of campaign recipients and responses going to impact our performance? If so, what approach would you recommend to improve that?
4) Are the # of opt-ins, opt-outs, global opt-ins and global opt-outs only available for EMOD, or can they be used when another email marketing system is used? If we can use them, how do we best load them into CRM in large numbers?
5) Documentation on metrics is hard to find... Can anyone tell me what fields are involved in the Campaign Response metrics and the criteria that affect them?
Thanks. I know it is a lot of questions, but we are new to integrating our marketing campaign efforts into CRM and find that, with large numbers, not knowing the best approach to handle them can greatly impact system performance.
Nathalie

Afternoon,
1) I would create reports with filters that segment the contacts into smaller chunks and extract them to CSV format. Once you have narrowed this down to the contacts you want to use, you can use the bulk data load process.
2) This information is available for reporting; please make sure you have granted access to it within the profile and that the layout type is not read-only.
3) This information is viewable within reporting (Campaign Response History).
My questions here are:
- How on earth do I prep my data so that these fields populate? Are they strictly reserved for EMOD?
----- Make the system auto-create these values on creation.
- What's the best approach to load 200,000 responses back into CRM?
----- Web services (see the batching sketch after this list).
- Is such a large volume of campaign recipients and responses going to impact our performance? If so, what approach would you recommend to improve that?
----- No, this should have no impact at all.
4) You would have to import the data from the other mail program into CRM (and EMOD); you could do this through data loads back into CRM.
5) I believe I have read some of this information in the help somewhere; I would suggest looking through the help for EMOD.
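For illustration only, here is a rough sketch of the kind of batching loop a web-services load could use. Nothing below comes from the CRM On Demand API: RecipientService, responses.csv, the column layout and the batch size of 20 are made-up placeholders; in practice you would call the proxy classes generated from the CRM On Demand Web Services WSDL and respect whatever batch limit your release documents.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.util.ArrayList;
    import java.util.List;

    public class ResponseLoader {

        /** Hypothetical stand-in for a generated CRM On Demand web-services proxy. */
        interface RecipientService {
            void upsertRecipients(List<String[]> batch) throws Exception;
        }

        static final int BATCH_SIZE = 20; // assumed limit; check what your release documents

        static void load(String csvFile, RecipientService service) throws Exception {
            List<String[]> batch = new ArrayList<String[]>();
            BufferedReader in = new BufferedReader(new FileReader(csvFile));
            try {
                String line;
                while ((line = in.readLine()) != null) {
                    batch.add(line.split(","));           // e.g. contactId,campaignId,status
                    if (batch.size() == BATCH_SIZE) {
                        service.upsertRecipients(batch);  // one web-service call per batch
                        batch.clear();
                    }
                }
                if (!batch.isEmpty()) {
                    service.upsertRecipients(batch);      // flush the remainder
                }
            } finally {
                in.close();
            }
        }
    }

The point is simply to stream the file and keep each request small, rather than holding 200,000 rows in memory or sending them in one call.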

Similar Messages

  • Most efficient way to handle Image sizes with respect to Mobile Screen Size

    Hi all,
    I am trying to find the best possible way to manage images with respect to the size of the mobile screen.
    If I have an image that best fits a mobile screen size of 176 x 144, how can I make it best fit a 128 x 128 screen size?
    rizzz86

    Hey Rizzz86,
    You could also scale down a higher resolution image to fit smaller screens; however, you will find that resizing by factors other than 2, 4 or 8 is harder to achieve with decent quality and much slower on lower-spec devices.
    You could also display the 128 x 128 image centered on the 176 x 144 screen with black borders around it.
    It will not look that bad (i.e. borders of 24 pixels left and right and 8 pixels top and bottom) and will cost you nothing in terms of space or processing.
    In the end, having an image in three different sizes will not increase the MIDlet size by much, unless you have tens of such images (for example custom UI elements).
    The best solution is a bit of both, I think. Personally I scale down by factors that are powers of 2 and display with the black borders.
    Daniel
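    To make the border arithmetic concrete, here is a minimal MIDP sketch of the centering approach described above. The resource name /logo.png and the assumption of a 128 x 128 image are made up for the example; the offsets are computed from whatever the device reports.

        import java.io.IOException;
        import javax.microedition.lcdui.Canvas;
        import javax.microedition.lcdui.Graphics;
        import javax.microedition.lcdui.Image;

        // Centers a fixed-size image on the screen, letting the cleared
        // background act as the black border.
        public class CenteredImageCanvas extends Canvas {
            private final Image image;

            public CenteredImageCanvas() throws IOException {
                image = Image.createImage("/logo.png"); // hypothetical 128 x 128 resource
            }

            protected void paint(Graphics g) {
                g.setColor(0x000000);                   // black border colour
                g.fillRect(0, 0, getWidth(), getHeight());
                int x = (getWidth() - image.getWidth()) / 2;   // 24 px on a 176-wide screen
                int y = (getHeight() - image.getHeight()) / 2; // 8 px on a 144-high screen
                g.drawImage(image, x, y, Graphics.TOP | Graphics.LEFT);
            }
        }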

  • Most efficient way to handle Strings?

    I've heard that Strings are immutable, so you should use StringBuffers. For example, let's say you have an array of Strings called testArray...
    String matchMe = "";
    for (int i = 0; i < testArray.length; i++) {
        matchMe = resultOfSomeExpression;       // creates a new String object each pass
        if (testArray[i].equals(matchMe)) {
            doSomeOperation();
        }
    }
    ...matchMe = resultOfSomeExpression will produce a new String object every time you go through the loop. So I've heard you're supposed to make matchMe a StringBuffer and use StringBuffer.replace() to reset it. That seems really cumbersome to me. I've been using StringBuffers as much as possible, but it requires a lot more code than just using Strings. What do other people do? What's the best procedure?
    Ethan

    The best procedure is what works best for you - but what I suggest is that you use a String object when you don't plan on modifying it, and that you use a StringBuffer object if you're going to modify a sequence of characters or will construct strings dynamically. The rationale behind this is that behind the scenes the compiler uses StringBuffer to handle concatenation of strings.
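    A small sketch of that rule of thumb (the sample data is made up): StringBuffer for the value you build up in a loop, a plain String for a value you never change.

        // Build a string in a loop: the StringBuffer version avoids creating a
        // new String object on every pass, which is the point made above.
        public class BufferDemo {
            public static void main(String[] args) {
                String[] words = { "alpha", "beta", "gamma" };   // sample data

                // Dynamic construction: use StringBuffer.
                StringBuffer sb = new StringBuffer();
                for (int i = 0; i < words.length; i++) {
                    if (i > 0) {
                        sb.append(", ");
                    }
                    sb.append(words[i]);
                }
                String joined = sb.toString();

                // A value you never modify can stay a plain String.
                String greeting = "Hello, " + joined;            // one-off concatenation is fine
                System.out.println(greeting);
            }
        }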

  • The most efficient way to search a large String

    Hi All,
    2 Quick Questions
    QUESTION 1:
    I have about 50 String keywords that I would like to use to search a big String object (between 300 and 3000 characters).
    Is the most efficient way to search it for my keywords like this?
    if (myBigString.indexOf("string1") != -1 || myBigString.indexOf("string2") != -1 || myBigString.indexOf("string3") != -1 /* ...and so on for 50 strings */)
        System.out.println("it was found");
    QUESTION 2:
    Can someone help me out with a regular expression search for phone numbers in the format NNN-NNN-NNNN?
    I would like it to return all instances of that pattern found on the page.
    I have done regular expressions in JavaScript and VBScript, but I have never done regular expressions in Java.
    Thanks

    Answer 2:
    If you have the option of using Java 1.4, have a look at the new regular expressions library... whose package name I forget :-/ There have been articles published on it, both at JavaWorld and IBM's developerWorks.
    If you can't use Java 1.4, have a look at the jakarta regular expression projects, of which I think there are two (ORO and Perl-like, off the top of my head)
    http://jakarta.apache.org/
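    For the record, the Java 1.4 package is java.util.regex. A minimal sketch for the NNN-NNN-NNNN search (the sample page text is made up):

        import java.util.regex.Matcher;
        import java.util.regex.Pattern;

        // Prints every NNN-NNN-NNNN phone number found in a block of text.
        public class PhoneNumberFinder {
            public static void main(String[] args) {
                String page = "Call 555-123-4567 or 800-555-0199 for details.";
                Pattern phone = Pattern.compile("\\b\\d{3}-\\d{3}-\\d{4}\\b");
                Matcher m = phone.matcher(page);
                while (m.find()) {
                    System.out.println("Found: " + m.group());
                }
            }
        }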
    Answer 1:
    If you have n search terms, and are searching through a string of length l (the haystack, as in looking for a needle in a haystack), then searching for each term in turn will take time O(n*l). In particular, it will take longer the more terms you add (in a linear fashion, assuming the haystack stays the same length)
    If this is sufficient, then do it! The simplest solution is (almost) always the easiest to maintain.
    An alternative is to create a finite state machine that defines the search terms (Or multiple parallel finite state machines would probably be easier). You can then loop over the haystack string a single time to find every search term at once. Such an algorithm will take O(n*k) time to construct the finite state information (given an average search term length of k), and then O(l) for the search. For a large number of search terms, or a very large search string, this method will be faster than the naive method.
    One example of a state-search for strings is the Boyer-Moore algorithm.
    http://www-igm.univ-mlv.fr/~lecroq/string/tunedbm.html
    Regards, and have fun,
    -Troy
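    If the naive approach turns out to be good enough, a loop over an array of terms reads better than fifty chained indexOf calls. A small sketch (the keyword list and haystack are placeholders):

        // Straightforward "search for each term in turn": O(n*l), but simple to maintain.
        public class KeywordSearch {
            public static boolean containsAny(String haystack, String[] terms) {
                for (int i = 0; i < terms.length; i++) {
                    if (haystack.indexOf(terms[i]) != -1) {
                        return true;                 // stop at the first hit
                    }
                }
                return false;
            }

            public static void main(String[] args) {
                String[] keywords = { "string1", "string2", "string3" }; // stand-ins for the real 50
                String bigString = "...a 300-3000 character document containing string2...";
                if (containsAny(bigString, keywords)) {
                    System.out.println("it was found");
                }
            }
        }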

  • What is the most efficient way of passing large amounts of data through several subVIs?

    I am acquiring data at a rate of once every 30mS. This data is sorted into clusters with relevant information being grouped together. These clusters are then added to a queue. I have a cluster of queue references to keep track of all the queues. I pass this cluster around to the various sub VIs where I dequeue the data. Is this the most efficient way of moving the data around? I could also use "Obtain Queue" and the queue name to create the reference whenever I need it.
    Or would it be more efficient to create one large cluster which I pass around? Then I can use unbundle by index to pick off the values I need. This large cluster can have all the values individually or it could be composed of the previously mentioned clusters (i.e. a large cluster of clusters).

    > I am acquiring data at a rate of once every 30mS. This data is sorted
    > into clusters with relevant information being grouped together. These
    > clusters are then added to a queue. I have a cluster of queue
    > references to keep track of all the queues. I pass this cluster
    > around to the various sub VIs where I dequeue the data. Is this the
    > most efficient way of moving the data around? I could also use
    > "Obtain Queue" and the queue name to create the reference whenever I
    > need it.
    > Or would it be more efficient to create one large cluster which I pass
    > around? Then I can use unbundle by index to pick off the values I
    > need. This large cluster can have all the values individually or it
    > could be composed of the previously mentioned clusters (i.e. a large
    > cluster of clusters).
    It sounds pretty good the way you have it. In general, you want to sort
    these into groups that make sense to you. Then if there is a
    performance problem, you can arrange them so that it is a bit better for
    the computer, but let's face it, our performance counts too. Anyway,
    this generally means a smallish number of groups with a reasonable
    number of references or objects in them. If you need to group them into
    one to pass somewhere, bundle the clusters together and unbundle them on
    the other side to minimize the connectors needed. Since the references
    are four bytes, you don't need to worry about the performance of moving
    these around anyway.
    Greg McKaskle

  • Most Efficient Way to Populate My Column?

    I have several very large tables, some of them are partitioned tables.
    I want to populate every row of one column in each of these tables with the same value.
    1.] What's the most efficient way to do this given that I cannot make the table unavailable to the users during this operation?
    I mean, if I were to simply do:
    update <table> set <column>=<value>;
    then I think I'll lock every row for writing by others until the commit. I figured there might be another way that makes better sense in my case.
    2.] Are there any optimizer hints I might be able to take advantage of here? I don't use hints much but with such a long running operation I'll take any help I can get.
    Thank you

    1. Maybe a better solution exists.
    Since you do not want to lock the table...
    Save the ROWID's of all the rows in that table in a temporary table.
    Write a routine which will loop through this temporary table, use the ROWID to update the main table, and issue commits at regular intervals.
    However, this does not take into account the rows that would be added to main table after the temporary table has been created.
    2. Not that I am aware of.
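    A rough JDBC sketch of that chunked-commit idea, for illustration only: it reads the ROWIDs straight from the table rather than staging them in a temporary table, and the connection string, table name, column name, value and commit interval are all placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.Statement;

        // Updates one column of every row in small committed chunks so other
        // sessions are never blocked for long.
        public class ChunkedUpdate {
            public static void main(String[] args) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:oracle:thin:@//dbhost:1521/ORCL", "user", "password"); // placeholders
                conn.setAutoCommit(false);

                Statement sel = conn.createStatement();
                ResultSet rows = sel.executeQuery("SELECT ROWID FROM my_table");     // placeholder table
                PreparedStatement upd = conn.prepareStatement(
                        "UPDATE my_table SET my_column = ? WHERE ROWID = ?");        // placeholder column

                int sinceCommit = 0;
                while (rows.next()) {
                    upd.setString(1, "the-new-value");   // placeholder value
                    upd.setString(2, rows.getString(1)); // ROWID fetched as a string
                    upd.executeUpdate();
                    if (++sinceCommit == 1000) {         // commit every 1000 rows
                        conn.commit();
                        sinceCommit = 0;
                    }
                }
                conn.commit();
                rows.close(); sel.close(); upd.close(); conn.close();
            }
        }

    As noted above, this still misses rows inserted after the ROWID scan starts, so it only illustrates the locking trade-off rather than being a complete solution.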

  • Most efficient way to get document names?

    I was wondering what is the most efficient way to get the document names in a container? Use the built in 'name' index somehow, or is there an 'efficient' XPath/XQuery?
    We've been using the XPath /* which is fine with small instances, but causes a Java heap out-of-memory error on large XML instances, i.e. /* gets everything, which is not ideal when all we want are document names.
    Thx in advance,
    Ant

    Hi Antony,
    Here is an example of retrieving the document names in C++:
    void doQuery(XmlContainer &container,
                 XmlQueryContext &context,
                 const std::string &XPath)
    {
        XmlResults results(container.queryWithXPath(0, XPath, &context));
        // Iterate through the result set as is normal
        XmlDocument theDocument;
        while (results.next(theDocument)) {
            std::cout << "Found document named: "
                      << theDocument.getName() << std::endl;
        }
    }
    Regards,
    Bogdan Coman

  • Most efficient way to do multiple crops on many images?

    I have a large number of images shot in the default 4:3 aspect ratio. I need to print almost 200 as 4x6, and an undetermined but certainly large number as 8x10, so I have a lot of cropping to do. What would seasoned Aperture users suggest as the most efficient way to do this? I've thought of two possibilities:
    1. Duplicate every image I need to print in both sizes and crop one for each size print. This is the best option I've thought of, but it would certainly eat a lot of drive space.
    2. Do all the 8x10 crops, revert to original, and do the 4x6 crops. This saves disk space, but leaves me with only the 4x6 crop in Aperture. (Sounds like I want to have my cake and eat it too, I suppose.)
    Anyway, there are a lot of you out there who have logged a lot more Aperture hours than I have. Is there a better workflow I have not considered?
    Thanks,
    Ben

    Hello Ben,
    The beauty of Aperture is that you can have many versions of an image without needing much extra disk space.
    To have three versions of an image cropped to different aspect ratios, don't create duplicate master images but use (from the Aperture main menu)  "Photos -> Duplicate version" or "New Version from Master". Then crop this new version to a different aspect ratio. Aperture will not really render a new image file but just store the cropping rectangle to be able to create the cropped image when you export or print it.
    So you can have an original version, a 8x10 version, a 4x6 version in your library without needing much extra space - that is one of the rare occasions when you can have your cake and eat it too
    Regards
    Léonie

  • [11g] most efficient way to calculate size of xmltype type column

    I need to check the current size of some xmltype column in a BIU trigger.
    I don't think it's good to use
      length(:new.xml_data.GetStringVal());
    or
      dbms_lob.GetLength(:new.xml_data.GetClobVal());
    What's the most efficient way to get the storage size?
    I don't need the string serialized size.
    It could also be the internal storage size (incl. administration data overhead).
    - thanks!
    regards,
    Frank

    > May I ask for what reason you need to know it?
    I need to handle very large XML document output, which currently hits the internal xmltype limitation of 4GByte, when aggregating XML document fragments for this.
    > You'll get a relevant answer if you give us relevant information :
    > - exact db version
    SELECT * FROM PRODUCT_COMPONENT_VERSION;
      # | product                                | version    | status
      1 | NLSRTL                                 | 11.2.0.3.0 | Production
      2 | Oracle Database 11g Enterprise Edition | 11.2.0.3.0 | 64bit Production
      3 | PL/SQL                                 | 11.2.0.3.0 | Production
      4 | TNS for Linux:                         | 11.2.0.3.0 | Production
    > - DDL of your table
    > XML stored as XMLType datatype can use different storage models, depending on the version.
    I don't use a dedicated storage clause.
    But I am hitting the problem already when aggregating into an xmltype variable in PL/SQL
    - BEFORE writing back to a result table.
    Can I avoid such problems when writing to a table DIRECTLY, without an intermediate xmltype PL/SQL variable
    - depending on the storage clause?
    The reason for asking how to get the size of an xmltype (in a table column and/or in a PL/SQL variable) is that I am thinking of threshold detection.
    If the threshold is reached: move the XML fragments accumulated so far to separate CLOB storage, and insert a smaller meta-information reference representing them in the output document.
    Finally, leave it up to the client system to use <xs:include> (or similar) to construct the complete document.
    rgds,
    Frank

  • What is the most efficient way to compare two Lists?

    List A{itemId,itemName} [1,xyz] [9,iyk] [4,iuo] .......
    List B{itemId,item price} [2,999] [9,888] [1, 444].......
    Assume A will be a much larger list than B
    I am trying to find all the items with the same itemId. What would be the most efficient way to do that?
    Thanks!

    Tinkerbell. wrote:
    > BigDaddyLoveHandles wrote:
    > > You wrote: "Can we assume that an itemId only occurs once in each list?"
    > > You're the one making claims and assumptions, not me.
    > No, in #4 I asked the OP to verify an assumption.
    An assumption that couldn't possibly be true. Why are you wasting our time?

  • Most efficient way to delete "removed" photos from hard disk?

    Hello everyone! Glad to have this great community to come to for help. I searched for this question but came up with no hits. If it's already been discussed, I apologize and would love to be directed to the link.
    My wife and I have been using LR for a long time. We're currently on version 4. Unfortunately, she's not as tech-savvy or meticulous as I am, and she has been unknowingly "Removing" photos from the LR catalogues when she really meant to delete them from the hard disk. That means we have hundreds of unwanted raw photo files floating around in our computer and no way to pick them out from the ones we want! As a very organized and space-conscious person, I can't stand the thought. So my question is, what is the most efficient way to permanently delete these unwanted photos from the hard disk?
    I did find one suggestion that said to synchronize the parent folder with their respective catalogues, select all the photos in "Previous Import," and delete those, since they will be all of the photos that were previously removed from the catalogue.
    This is a great suggestion, but it probably wouldn't work for all of my catalogues since my file structure is organized by date (the default setting for LR). So, two catalogues will share the same "parent folder" in the sense that they both have photos from May 2013, but if I synchronize May 2013 with one, then it will get all the duds PLUS the photos that belong in the other catalogue.
    Does anyone have any suggestions? I know there's probably not an easy fix, and I'm willing to put in some time. I just want to know if there is a solution and make sure I'm working as efficiently as possible.
    Thank you!
    Kenneth

    I have to agree with the comment about multiple catalogs referring to images that are mixed in together... and the added difficulty that may have brought here.
    My suggestions (assuming you are prepared to combine the current catalogs into one)
    in each catalog, put a distinctive keyword onto all the images so that you can later discriminate these images as to which particular catalog they were formerly in (just in case this is useful information later)
    as John suggests, use File / "Import from Catalog" to bring all LR images together into one catalog.
    then in order to separate out the image files that ARE imported to LR, from those which either never were / have been removed, I would duplicate just the imported ones, to an entirely separate and dedicated disk location. This may require the temporary use of an external drive, with enough space for everything.
    to do this, highlight all the images in the whole catalog, then use File / "Export as Catalog" selecting the option "include negatives". Provide a filename and location for the catalog inside your chosen new saving location. All the image files that are imported to the catalog will be selectively copied into this same location alongside the new catalog. The same relative arrangement of subfolders will be created there, for them all to live inside, as is seen currently. But image files that do not feature in LR currently, will be left behind by this operation.
    your new catalog is now functional, referring to the copied image files. Making sure you have a full backup first, you can start deleting image files from the original location, that you believe to be unwanted. You can do this safe in the knowledge that anything LR is actively relying on, has already been duplicated elsewhere. So you can be quite aggressive at this, only watching out for image files that are required for other purposes (than as master data for Lightroom) - e.g., the exported JPG files you may have made.
    IMO it is a good idea to practice a full separation of image files used in your LR image library, from all other image files. This separation means you know where it is safe to manage images freely using the OS, vs where (what I think of as the LR-managed storage area) you need to bear LR's requirements constantly in mind. Better for discrete backup, too.
    In due course, as required, the copied image files plus catalog can be moved bodily to another drive (for example, if they have been temporarily put on an external drive, and you want to store them on your main internal one again). This then just requires a single re-browsing of their parent folder's location, in order to correct LR's records inside this catalog, as to the image files' changed addresses.
    If you don't want to combine the catalogs into one, a similar set of operations as above, can be carried out for each separate catalog you have now. This will create a separate folder structure in each case, containing just those duplicated image files. Once this has been done for all catalogs, you can start to clean up the present image files location. IMO this is very much the laborious and inflexible option, so far as future management of the total body of images is concerned... though there may still be some overriding reason for working that way.
    RP

  • Most efficient way to open a new TextEdit doc at a specific place

    Suppose I've navigated in the Finder to some deep dark location in the folder hierarchy...   I want a new text file titled "notes" in this folder, i.e. to this path.
    Assuming TextEdit is already open, what is the most efficient way of creating and begin adding text to a new TextEdit file in that location?
    Tried this:  Maybe the usual Save dialog is aware of the current folder, so I can choose "New Document" in TextEdit's Dock icon pulldown, and then choose that path when I do File --> Save, choose the Save dialog's expanded view, and pull down the Where: selection.  Nope.  The current path is not there.
    Tried this:  Keeping an empty TextEdit document named untitled on my desktop.  Drag-copying that to the current folder.  Rename the file to "notes", open it, and start editing.  That works, except it is clumsy.
    Is there a better way?
    Please forgive me if I'm missing something incredibly obvious.

    The problem with that is a new file has nothing in it.
    If you save a dummy file on the desktop someplace, or use a real file in each of your locations, you could right-click and duplicate it, then double-click to open it in the program of choice.
    Another option would be to create an AppleScript that takes the current open window's pathname and creates and saves a text file there.
    You save the app in the Dock and only have to click on it once; it automatically quits when the mission is accomplished.

  • Most efficient way to load XML file data into tables

    I have a complex XML file running into MBs. I want to load its data into 7-8 tables.
    Which way will be better:
    1) Use SQL Loader to actually load directly into the 7-8 tables directly by modifying the control card.
    Is this really possible and feasible? I am not even sure about it.
    2) Load data as XML Type in a table and register it. Then extract from there to load into various tables.
    Please help. I have to find the most efficient way of doing it.
    Regards,
    Sudhir

    Yes, it is possible to use SQL*Loader to parse and load XML, but that is not what it was designed for, so it is not recommended. You also don't need to register a schema just to load/store/parse XML in the DB.
    So where does that leave you?
    Some options
    {thread:id=410714} (see page 2)
    {thread:id=1090681}
    {thread:id=1070213}
    Those talk some about storage options and reading in XML from disk and parsing XML. They should also give you options to consider. Without knowing more about your requirements for the effort, it is difficult to give specific advice. Maybe your 7-8 tables don't exist and so using Object Relational Storage for the XML would be the best solution as you can query/update tables that Oracle creates based off the schema associated to the XML. Maybe an External Table definition works better for reading the XML into the system because this process will happen just once. Maybe using WebDAV makes more sense for loading XML to be parsed (I don't have much experience with this, just know it is possible from what I've read on the forums). Also, your version makes a difference as you have different options available depending upon the version of Oracle.
    Hope all that helps as a starter.
    Edited by: A_Non on Jul 8, 2010 4:31 PM
    A great example, see the answers by mdrake in {thread:id=1096784}

  • Most efficient way to do some string manipulation

    Greetings,
    I need to cleanse some data in a string by replacing unsafe characters with encoded equivalents. (FYI, this is for the purpose of transforming "unsafe" characters into encoded values as data inside an XML document).
    The following code accomplishes the task:
    Note that a string "currentValue" contains the data to be cleansed.
    A string, "encodedValue" contains the result.
      for (int counter = 0; counter < currentValue.length(); counter++) {
        String addChar = currentValue.substring(counter, counter + 1);
        if (addChar.equals("<"))
          addChar = "&#60;";
        if (addChar.equals(">"))
          addChar = "&#62;";
        if (addChar.equals("="))
          addChar = "&#61;";
        if (addChar.equals("\""))
          addChar = "&#34;";
        if (addChar.equals("'"))
          addChar = "&#39;";
        if (addChar.equals("/"))
          addChar = "&#47;";
        if (addChar.equals("\\"))
          addChar = "&#92;";
        encodedValue += addChar;
      } // for
    I'm sure there is a way to make this more efficient. I'm not exactly "new" to Java, but I am learning on my own with no formal training and often take a "brute force" approach with my initial effort.
    What would be the most efficient way to re-do the above?
    TIA,
    --Paul Galvin
    Integrated Systems & Services Group

    I'm a C++ programmer, so I'm not totally up on these Java classes either, but... from a C++ standpoint you might want to consider using an if/else chain.
    By using else-if, you only test the character until you find the actual "violating" character and skip the rest of the tests.
    Also, you might try using something to check for alphanumeric cases first and use the continue keyword when you find one. Since more of your characters are probably safe than unsafe, you can skip the whole if/else chain and only do one test on the good characters. (I just looked for a way to test that and didn't find one. C++ has a function that does it by checking the ASCII number range; I don't think that works in Java, but maybe you can find one, and it would probably reduce the number of tests.)
    happy hunting,
    txjump :)
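    For what it's worth, Java does have such a check: Character.isLetterOrDigit. Below is a sketch of the suggestion above (continue past safe characters, else-if for the rest), using a StringBuffer so the loop doesn't build a new String on every append. The entity list simply mirrors the original post.

        // Encodes a handful of XML-unsafe characters as numeric character references.
        public class Encoder {
            public static String encode(String currentValue) {
                StringBuffer encodedValue = new StringBuffer();
                for (int i = 0; i < currentValue.length(); i++) {
                    char c = currentValue.charAt(i);
                    if (Character.isLetterOrDigit(c)) {   // most characters are safe
                        encodedValue.append(c);
                        continue;
                    }
                    if (c == '<')       encodedValue.append("&#60;");
                    else if (c == '>')  encodedValue.append("&#62;");
                    else if (c == '=')  encodedValue.append("&#61;");
                    else if (c == '"')  encodedValue.append("&#34;");
                    else if (c == '\'') encodedValue.append("&#39;");
                    else if (c == '/')  encodedValue.append("&#47;");
                    else if (c == '\\') encodedValue.append("&#92;");
                    else                encodedValue.append(c);
                }
                return encodedValue.toString();
            }

            public static void main(String[] args) {
                System.out.println(encode("a<b c=\"d\"")); // prints a&#60;b c&#61;&#34;d&#34;
            }
        }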

  • Most efficient way to place images

    I am composing a catalog with a lot of images along with the text.  The source images are often not square (perfectly vertical, portrait).  I also want to add a thin line frame around each one in InDesign to spruce up the look.  I'm spending a lot of time in Photoshop straightening images, because rotating in InDesign to get the image straight results in a non-straight frame.
    Should I create a small library of frames that I place, then place non-straight images in them (and how do I do that) and rotate in InDesign?  Etc?
    What would be the most efficient way to do this?
    Thanks

    To tag onto what Peter said, when you click on the image with the Direct Selection tool you can also use the up and down arrow in the rotation Dialog (where you enter the angle, at the top) to easily change the rotation.
    Also, when you place images in InDesign you can select a number of images at once and continually click the document (or image frame) and place all the images you selected to import. To clarify, you can have a whole bunch of empty image frames on the page then go to file > place and select all your images, then continually click and place them inside each empty frame.
