Tethered shooting for a large number of people

Here's a little background to my question. I have mainly used Lightroom for organizing and developing imports from my Canon 6D and 7D. The shoot I'm looking for help with here is quite different from what I'm accustomed to: I'll be taking portraits of a large number of people at a school fundraiser for a club, where the lighting conditions stay the same and the camera sits on a tripod, shooting tethered to a laptop. If anyone could help with the following questions it would be greatly appreciated, thanks.
1. During tethered capture, is it possible to have the Metadata panel open at the same time, so that someone can type their email address into each individual picture?
2. Following on from question one, is it possible to email all the photos from the shoot individually to the recipients listed in the metadata?
3. Can Lightroom automatically print photos after they are taken in a tethered shoot?
4. Can Lightroom save specific metadata tags from a collection into a text file, listed along with each file name?
Help with any of these questions would be great; I haven't attempted anything like this before, and frankly I haven't found anything helpful in the Lightroom documentation. I apologize for any confusing wording; my computer wasn't accessible from where I am, so this was typed on an iPhone.

The tethered capture floating tool panel is not modal - you can still use Lightroom's regular panels and tools while it's open, so working on a file (including typing into the Metadata panel) while a session is active is no problem, provided a new shot doesn't arrive while auto-advance is turned on, since that would switch the selection to the new image.
Lightroom cannot directly email images to an address taken from the IPTC metadata; for that you would need either a plugin or an external application, and I don't know of one currently available.
Again, exporting metadata to a text file is a job for a plugin - but this time there are several available (such as http://www.photographers-toolbox.com/products/lrtransporter.php).
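If a plugin isn't an option for question 4, an external application can do the same job. As a rough illustration (the metadata field is an assumption; it would be whichever IPTC/XMP field the email addresses were typed into), running ExifTool over the shoot folder with something like exiftool -csv -FileName -IPTC:Caption-Abstract /path/to/shoot > shoot-metadata.csv produces a text/CSV file listing each file name alongside that field, which covers the "tags plus file name in a text file" requirement.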

Similar Messages

  • Tethered Shooting for Sony - Fixed in Aperture 3?

    Can anyone advise if Aperture 3 has support for tethered shooting with Sony cameras that was missing in Aperture 2?
    The existing support article is for Aperture 2 "The following cameras have been qualified for tethered shooting in Aperture 2:" (http://support.apple.com/kb/HT1085) and makes no mention of Aperture 3 support.
    Thanks
    Paul

  • Support for tethered shooting for Canon EOS Digital Rebel T1i

    I just read online that Apple Aperture 3.1.2 won't tether to my Canon EOS Digital Rebel T1i under OS X 10.6.7.  Is this true?  If so, is there a workaround or some software I can install?
    Thanks,
    Joe

    BTW - in case anyone is wondering, I've tested this... Canon T1i does work with tethering. (this camera is natively supported by Aperture for purposes of tethering)
    For any cases where there is a camera which is not supported directly, there is also a work-around which should work for almost any camera.
    Go to the Aperture plug-ins page:  http://www.apple.com/aperture/resources/plugins.html
    Look for a plug-in called "Aperture Hot Folders" (direct link is here:  http://www.apple.com/downloads/macosx/automator/aperturehotfolder.html?cmp )
    The plug-in monitors a folder (of your choosing).  Any image which shows up in that folder will automatically be imported into an Aperture project (it's on-the-fly).  Essentially you'd use the Canon EOS Utility to do the tethering, which requires that you pick a target folder where it'll save all pictures you shoot during the tethered session.  You then tell Aperture Hot Folders to monitor that same folder and import any images found into an Aperture project. It gives you the ability to create tethering support for any camera that can do "tethering" even if Aperture doesn't directly support the specific camera (by using the manufacturer's own tethering utility.)
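    The hot-folder mechanism itself is generic, so in case it helps to see the idea in code, here is a rough, hypothetical sketch in Java (not the plug-in's implementation; the folder path is an assumption). It simply watches a directory and reports each file the tethering utility drops into it, which is the point where an importer would take over:
    import java.nio.file.*;

    // Minimal hot-folder watcher: report every file created in the watched directory.
    public class HotFolderWatcher {
        public static void main(String[] args) throws Exception {
            Path folder = Paths.get("/Users/me/TetherDrop");   // assumed tethering target folder
            WatchService watcher = FileSystems.getDefault().newWatchService();
            folder.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
            while (true) {
                WatchKey key = watcher.take();                  // blocks until the folder changes
                for (WatchEvent<?> event : key.pollEvents()) {
                    Path newFile = folder.resolve((Path) event.context());
                    System.out.println("New image arrived: " + newFile);
                    // an importer would hand the file to the photo library here
                }
                key.reset();
            }
        }
    }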

  • Tethered shooting for Canon FINALLY here...but

    Has anyone found a way to shoot to Mac AND card on a 5DII?

  • Query For Large Amount of Data

    Hello All,
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database designing. My goal is to create an inquiry screen for a database of people.
    I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows in it.
    Currently I have it set up so that the person types in the name they want to search and it gets passed into a hidden page item on the next page, where there is a report with a select statement based on the page item. Everything works right now, however it is slow. I am having a 5-10 second delay before the results come up.
    My question is, is there a better way to set up these tables? What is the best way to make this faster?
    I'm sorry if this is a vague question, but any help, or a point in the right direction, will be greatly appreciated.
    Thank You !

    976533 wrote:
    Hello All,
    Welcome to the forum: please read the FAQ and forum sticky threads (if you haven't done so already), and update your forum profile with a real handle instead of "976533".
    When you have a problem you'll get a faster, more effective response by including as much relevant information as possible upfront. This should include:
    - Full APEX version
    - Full DB version/edition/host OS
    - Web server architecture (EPG, OHS or APEX listener/host OS)
    - Browser(s) and version(s) used
    - Theme
    - Template(s)
    - Region/item type(s) (making particular distinction as to whether a "report" is a standard report, an interactive report, or in fact an "updateable report", i.e. a tabular form)
    With APEX we're also fortunate to have a great resource in apex.oracle.com where we can reproduce and share problems. Reproducing things there is the best way to troubleshoot most issues, especially those relating to layout and visual formatting. If you expect a detailed answer then it's appropriate for you to take on a significant part of the effort by getting as far as possible with an example of the problem on apex.oracle.com before asking for assistance with specific issues, which we can then see at first hand.
    976533 wrote:
    I apologize in advance if I am not posting this in the right section. I am fairly new to APEX and database designing. My goal is to create an inquiry screen for a database of people.
    It might be more appropriate to the {forum:id=75} forum, so you should look at the following entries on their FAQ as well:
    - {message:id=9360002}
    - {message:id=9360003}
    976533 wrote:
    I am running APEX 4.2 on 11g. The information is stored in 3 tables: Names, Demographics, Address. Each table has a PIN ID column that ties them all together. Each table has almost a million rows in it. Currently I have it set up so that the person types in the name they want to search and it gets passed into a hidden page item on the next page, where there is a report with a select statement based on the page item. Everything works right now, however it is slow. I am having a 5-10 second delay before the results come up. My question is, is there a better way to set up these tables? What is the best way to make this faster?
    Are there suitable indexes on the tables?
    Does the report query use them?
    As described above, either reproduce the problem on apex.oracle.com, or post DDL to allow us to recreate the tables and indexes, together with the SQL from your report.
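    As a hedged illustration of the indexing point (table and column names here are assumptions, not taken from the thread): a search such as WHERE upper(last_name) = upper(:P2_SEARCH) can use a function-based index like create index names_upper_last_ix on names (upper(last_name)); whereas a leading-wildcard predicate such as upper(last_name) LIKE '%'||upper(:P2_SEARCH)||'%' cannot be satisfied by a normal B-tree index range scan and will force a full scan of the million-row table, which would be consistent with the 5-10 second delay described.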

  • SharePoint Library for Large Amounts of Engineering Data

    We are currently using traditional project directory folders for large projects, sometimes with tens of thousands of documents. We are planning on migrating the data to SharePoint, and the path forward is unclear.
    Initially it was recommended to use a library, not numerous folders, to contain the data so that searching of the data is improved. That sounded great. The 1st project used to pilot this for other projects is divided into 20 different modification packages. A library category was created for MODS with selectable options of the 20 mod package names and “No Defined” (the default value). Some data items are shared between more than one MOD, so this category can have more than one assignment.
    When we looked at the directory structure in place, we found no consistency in folder names and no consistency in directory structure. Many folders have 5 or 6 (or more) levels of subdirectories. Ideally we want no more than 4 or 5 categories of metadata to define all the data. Mapping from chaos into a comparatively small number of categories is daunting.
    When searching this forum I find that libraries should be limited to 2,000 items. There are tens of thousands of items in our pilot project. Surely someone somewhere has encountered this organizational problem. I could use some advice from someone who has been there before.

    John,
    The limit of 2,000 is not a hard limit; the actual number of items you can store in a list is 30,000,000. However, more items will have an impact on rendering performance and on locking in the SQL table.
    Also, the limit you mentioned (2,000) is really the list view threshold limit, and it is actually 5,000.
    One important distinction: boundaries are hard limits, which you cannot exceed, while supported limits are based on testing and can be exceeded, but doing so may cause issues.
    That being said, I would suggest you check out this link on
    SharePoint Server 2010 capacity management: Software boundaries and limits
    http://technet.microsoft.com/en-us/library/cc262787(v=office.14).aspx
    and explore other ways of optimizing your list.
    Here are some references that should help you optimize:
    http://office.microsoft.com/en-us/sharepoint-foundation-help/manage-lists-and-libraries-with-many-items-HA010377496.aspx
    http://technet.microsoft.com/en-us/library/cc262813(v=office.14).aspx
    http://office.microsoft.com/en-us/sharepoint-server-help/sharepoint-lists-v-techniques-for-managing-large-lists-RZ101874361.aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - http://www.SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • What Java collection for large amounts of data and user-customizable records

    I'm trying to write an application which operates on a large amount of data. I want the user to be able to customize the data structure (record) using different types of variables (float, int, bool, string, enums). These records should be stored in some kind of array. Size of a record: 1-200 variables; size of the array of those records: about 100,000 items (one record every second throughout a whole day). I want the data stored in some embedded database (SQLite, HSQLDB) with access through plain JDBC. Could you give me some advice on how to design these data structures? Sincerely yours :)
    OK, maybe I should give an example. This is some C++ code.
    I made an interface:
    class CParamI {
    public:
         virtual string toString() = 0;
         virtual void addValue( CParamI * ) = 0;
         virtual void setValue( CParamI * ) = 0;
         virtual BYTE getType() = 0;
         virtual ~CParamI() {}
    };
    Then I made a template class derived from the CParamI interface:
    template <class T>
    class CParam : public CParamI {
    public:
         CParam();
         void setValue( T val );
         T getValue();
         string toString();
         void setValue( CParamI *src ) {
              if ( itemType == src->getType() ) {
                   CParam<T> *ptr = (CParam<T>*)src;
                   value = ptr->value;
              }
         }
    private:
         BYTE itemType;
         T value;
    };
    A sample constructor of the <int> template:
    template<> CParam<int>::CParam() {
         itemType = ParamType::INTEGER;
    }
    This solution makes it possible for me to write a collection of CParamI:
    std::vector<CParamI*> myCollection;
    CParam<int> *pi = new CParam<int>();
    pi->setValue(10);
    myCollection.push_back((CParamI*)pi);
    Is this a correct solution? My main problem is getting data back out of the collection: I have to check its data type using the getType() method of the CParamI interface.
    Could you please give me some advice, some idea of how to do this right in Java?

    If you have the requirement that you have to be able to configure on the fly, then what I've done in the past is just put everything into data pairs in a list: something along the lines of (<Vector>, <String>), where the Vector would store your data and the String would contain the data type. I would then write a checker to validate the input according to the SQL datatypes that I want to support on the project. It's not a big deal with the amount of data you are talking about.
    The problem you're going to have is when you try to allow dynamic definition, on the fly, of data being input to a table that has already been defined. Your DB will not support that, unless you just store that data pair, which I do not suggest.
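    A minimal Java sketch of that data-pair idea, for illustration only (the class, enum and field names are assumptions, not from the thread): each user-defined record keeps parallel lists of field names, declared types and values, and the setter validates every value against its declared type, which is the checker described above.
    import java.util.ArrayList;
    import java.util.List;

    // Field types the user may choose from when defining a record layout at runtime.
    enum FieldType { INT, FLOAT, BOOL, STRING }

    // One user-configurable record: parallel lists of field names, declared types and values.
    class DynamicRecord {
        private final List<String> names = new ArrayList<String>();
        private final List<FieldType> types = new ArrayList<FieldType>();
        private final List<Object> values = new ArrayList<Object>();

        void defineField(String name, FieldType type) {
            names.add(name);
            types.add(type);
            values.add(null);
        }

        // Reject values that do not match the declared type of the field.
        void setValue(int index, Object value) {
            FieldType t = types.get(index);
            boolean ok = (t == FieldType.INT && value instanceof Integer)
                    || (t == FieldType.FLOAT && (value instanceof Float || value instanceof Double))
                    || (t == FieldType.BOOL && value instanceof Boolean)
                    || (t == FieldType.STRING && value instanceof String);
            if (!ok) {
                throw new IllegalArgumentException(names.get(index) + " expects " + t);
            }
            values.set(index, value);
        }

        Object getValue(int index) { return values.get(index); }
        FieldType getType(int index) { return types.get(index); }

        public static void main(String[] args) {
            DynamicRecord r = new DynamicRecord();
            r.defineField("temperature", FieldType.FLOAT);
            r.defineField("valveOpen", FieldType.BOOL);
            r.setValue(0, 21.5);
            r.setValue(1, Boolean.TRUE);
            System.out.println(r.getValue(0) + " / " + r.getValue(1));
        }
    }
    For the roughly 100,000 records per day, rows like this would then be written to the embedded database (SQLite or HSQLDB over JDBC) in batches with a PreparedStatement, with one table column generated per defined field, rather than keeping the whole array in memory.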

  • XML-Export Error for large amount of data

    Hi there...
    I have an application process, which runs on demand (Button) and which exports data (from sql query) into a file (.xls).
    The result is being formatted and the export works fine as long as the query returns just a small amount of data, approx. 8 to 10 rows.
    As the result is stored in a clob, I output the data with "htp.prn" in a loop by "cutting" the clob into small pieces (varchar).
    However, as soon as the amount is bigger than the mentioned 8 to 10 rows, I get an error (sqlerrm: ORA-06502: PL/SQL: numeric or value error).
    I guess there must be something wrong with my loop or the way I "cut" the clob into pieces and output them.
    Maybe someone has a hint for me on where exactly to look!?
    Thanks in advance...
    Johnny
    Here is my code (I removed parts of it, which are not important for this issue):
    declare
    l_xml_header varchar2(32767);
    l_xml_body clob;
    l_xml_text varchar2(32767);
    l_xml_footer varchar2(32767);
    runner number;
    clob_size number;
    begin
    runner := 2;
    owa_util.mime_header( 'application/octet', FALSE);
    htp.p('Content-Disposition: attachment; filename="Test.xls"');
    owa_util.http_header_close;
    l_xml_header := '<?xml version="1.0" encoding="utf-8"?>'||chr(10)||
    '<?mso-application progid="Excel.Sheet"?>'||chr(10)||
    '<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:o="urn:schemas-microsoft-com:office:office"'||chr(10)||
    'xmlns:x="urn:schemas-microsoft-com:office:excel"'||chr(10)||
    'xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"'||chr(10)||
    'xmlns:html="http://www.w3.org/TR/REC-html40">'||chr(10)||
    '<DocumentProperties xmlns="urn:schemas-microsoft-com:office:office">'||chr(10)||
    '<Version>1.0</Version>'||chr(10)||
    '</DocumentProperties>'||chr(10)||
    '<ExcelWorkbook xmlns="urn:schemas-microsoft-com:office:excel">'||chr(10)||
    '<WindowHeight>8580</WindowHeight>'||chr(10)||
    '<WindowWidth>15180</WindowWidth>'||chr(10)||
    '<WindowTopX>120</WindowTopX>'||chr(10)||
    '<WindowTopY>45</WindowTopY>'||chr(10)||
    '<ProtectStructure>False</ProtectStructure>'||chr(10)||
    '<ProtectWindows>False</ProtectWindows>'||chr(10)||
    '</ExcelWorkbook>'||chr(10)||
    '<Styles>'||chr(10)||
    '<Style ss:ID="Default" ss:Name="Normal">'||chr(10)||
    '<Alignment ss:Vertical="Bottom"/>'||chr(10)||
    '<Borders/>'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss"/>'||chr(10)||
    '<Interior/>'||chr(10)||
    '<NumberFormat/>'||chr(10)||
    '<Protection/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s22">'||chr(10)||
    '<Font x:Family="Swiss" ss:Bold="1"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s67">'||chr(10)||
    '<Font ss:FontName="Arial" x:Family="Swiss" ss:Color="#FFFFFF"/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s157">'||chr(10)||
    '<Borders/>'||chr(10)||
    '</Style>'||chr(10)||
    '<Style ss:ID="s158">'||chr(10)||
    '<Borders>'||chr(10)||
    '<Border ss:Position="Right" ss:LineStyle="Continuous" ss:Weight="1"/>'||chr(10)||
    '</Borders>'||chr(10)||
    '</Style>'||chr(10)||
    '</Styles>';
    for z in 1..1
    loop
    l_xml_body:=l_xml_body||'<Worksheet ss:Name="Worksheet1"> <Table x:FullColumns="1" x:FullRows="1" ss:DefaultColumnWidth="60">';
    l_xml_body:=l_xml_body||'<Row><Cell ss:StyleID="s163"><Data ss:Type="String">Colum1</Data></Cell>'||
    '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum2</Data></Cell>'||
    '<Cell ss:StyleID="s163"><Data ss:Type="String">Colum3</Data></Cell>'||
    '<Cell ss:StyleID="s163"><Data ss:Type="String">...</Data></Cell>'||
    '<Cell ss:StyleID="s166"><Data ss:Type="String">ColumN</Data></Cell></Row>';
    for z in (
    select
    a."Col1",
    a."Col2",
    b."Col3",
    b."ColN"
    from table1 a,
    table2 b
    where a.id = b.id
    )
    loop
    l_xml_body := l_xml_body||'<Row><Cell ss:StyleID="s157"><Data ss:Type="String">'||
    z.Col1||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
    z.Col2||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
    z.Col3||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
    ... ||'</Data></Cell><Cell ss:StyleID="s157"><Data ss:Type="String">'||
    z.ColN||'</Data></Cell>';
    l_xml_body := l_xml_body||'</Row>'||chr(10);
    runner := runner + 1;
    end loop;
    l_xml_body := l_xml_body||'</Table>';
    end loop;
    clob_size := dbms_lob.getlength(l_xml_body);
    htp.prn(l_xml_header);
    for i in 1..ceil(clob_size / 32767)
    loop
    l_xml_text := dbms_lob.SUBSTR (l_xml_body, 32767, v_count);
    HTP.prn (l_xml_text);
    v_count := v_count + 32767;
    end loop;
    htp.prn('</Worksheet></Workbook>');
    HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
    EXCEPTION
    WHEN OTHERS
    THEN
    OWA_UTIL.mime_header ('application/octet', FALSE);
    HTP.prn ('Content-Disposition: attachment; filename="Test.xls"');
    OWA_UTIL.http_header_close;
    HTMLDB_APPLICATION.g_unrecoverable_error := TRUE;
    end;

  • Mail for large amount of data

    Hey guys,
    I just switched to Mac (and I love it ;)) and now I am searching for a mail program to host my office POP mail account. I normally get around 5000 messages a month, totalling over 3 GB, mostly pictures.
    Can Mail handle that amount of data, or will it corrupt my inbox?
    I am asking because, under Windows, OE couldn't handle that amount of data, and neither could Thunderbird before the update to 1.5.

    Hi Ernie,
    Thanks for your ideas.
    more below:
    Mick,
    The only place I know for sure that discusses this can be found at:
    http://docs.info.apple.com/article.html?artnum=25812
    Yeah, I read this when I had a couple In Boxes blow a year or two ago. Sadly, it wasn't really up to date and I'd seen no warning or otherwise prior that this could happen. These days I try to keep the IN BOX as slim and trim as possible. Just wish I knew what the real specs were on how much is too much?
    Other than an individual xxxx.mbox folder, I know of no limit that would apply, either to the total size of the Mail folder, nor to number of messages.
    What's the limit to an individual xxxx.mbox folder?
    Having said that, I never rely solely upon the Mail files to archive important attachments. I also keep my Mail folder active on more than one Mac, and thus on more than one hard drive. I do not, however, sync general Sent messages between the various Macs, on the theory that other backup practices protect any information that I would ever attach to send. Of course some On My Mac mailboxes are archiving both received and sent messages for certain subject areas.
    Sent messages are important to most of the businesses I support so they all need to be there. Nice thought about archiving attachments. I wonder if it's possible to create a rule or automator flow that would do that? Any attachment over 500k would get archived and then deleted from the message. I wonder if that's possible and simple enough...
    My own Mail folder is in excess of 5 GB, and my largest individual xxxx.mbox barely exceeds 1 GB.
    Is each folder created "On My Mac" one mbox? Again, she has in excess of 20GB and I'm worried about all that weight in the program. Makes me long for the good old days of Eudora which could handle massive sizes like that.
    thanks, Ernie.
    cheers,
    Mick

  • HotSpot optimization for large amounts of array use

    Greetings all,
    I'm hoping some of the people using Java for crypto and/or image processing/media file processing might be wandering by to get some input.
    We are building a secured storage system for media files, currently using the Twofish algorithm (one of the AES candidates that wasn't selected). We are bulk encrypting files for entry into the storage system, and in that process, I was already using Java for some of the management. Thus I decided to use the JCE framework to get the cryptography working.
    I hate to say it, but this Java evangelist is a bit appalled at how slowly it seems to go. My suspicion is the array bounds checking in the code, but I haven't gone byte-code snooping yet. I've tried a few of the symmetric algorithms (DES, AES/Rijndael and Twofish) and get the same sort of results. I've also tried the Cryptix library for Twofish, as the Sun JCE doesn't include Twofish.
    In straight C code, a G4 700 MHz and an UltraSPARC-III 750 MHz both get around 20-25 MB/s encoding.
    In Java, I'm getting about 1.3-1.9 MB/s. I reduced the Java code to raw crypto without using the JCE, and it didn't make a noticeable difference.
    That's a BIG gap. Not 10%, not 20%, but about 95% reduction in performance.
    The profiling with -Hprof shows about 90% of the time in the crypto routines, and all of it is compiled via HotSpot.
    Settings are with java 1.3.1 on the G4, and 1.4.0_01 on the UltraSparc-III.
    Settings were -server -Xmx128m -Xms128m
    Does the bounds checking impose that high of a penalty?
    Reading the book Java Platform Performance (excellent book BTW), Steve Wilson and Jeff Kesselman indicate that the array checking could be optimized out in certain loop constructs in HotSpot, but that preliminary examination showed that this would only benefit a narrow range of developers.
    With JMF, JCE, and the increasing capabilities of Java2D and 3D, I expect that this sort of processing will increase dramatically, but only if this level of performance penalty can be removed.
    Anything else I might try?
    Dallas

    Thanks for the reply. I've been looking into it further, but with no real advance. I've implemented a Twofish JNI bridge that runs the actual encryption natively, at pretty much native speeds. The array access in the Twofish cipher does copy all elements locally, and the rest of the cipher is bit manipulation on bytes. I haven't quite gotten deep enough into the actual cipher to see where the performance penalty is being incurred.
    Possibly I'm mistaken in thinking it was the array access, but I didn't expect the raw bit twiddling and bitwise operators to be slow.

  • How do I remove spaces or special characters within a cell for large amounts of data

    Is there any shortcut to remove spaces between words and numbers within a cell?
    Example:
    Current: .5 lt PET (6)
    Need: .5ltPET(6)
    Is there any shortcut to remove special characters between numbers within a cell?
    Example:
    Current: 0--000--000--0
    Need: 00000000

    Thanks Wayne.
    I have been away from using Numbers or Excel for 4-5 years, so it is slowly coming back to me. I get that I need to use the SUBSTITUTE function; however, I am having trouble getting it to work.
    My Data
    ST PAULI 12/12 NR
    $27.16
    12oz NR(12)
    0--80660--95937--5
    ST PAULI 4/6/12 NR
    $28.76
    12oz NR(6)
    0--80660--95935--1
    ST PAULI DK 12/12 NR
    $0.00
    12oz NR(12)
    0--000--000--0
    ST PAULI DK 4/6/12 NR
    $28.76
    12oz NR(6)
    0--80660--95945--0
    ST PAULI N/A 4/6/12 NR
    $20.66
    12oz NR(6)
    0--80660--95955--9
    CAYMAN JACK 4/6/12 NR
    $29.12
    12oz NR(6)
    8--15829--01006--8
    CAYMAN JACK 8OZ/12PK CAN
    $23.18
    8oz CAN(12)
    8--15829--01061--7
    TGIF LIIT 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01043--3
    TGIF MARGARITA 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01047--1
    TGIF PINA COLADA 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01045--7
    TGIF STRAWBERRY 10OZ FROZEN POUCH
    $35.80
    10oz POUCH(24)
    8--15829--01042--6
    BALLAST PT BIG EYE IPA 1/2 BBL
    $190.00
    KEG 1984oz (1/2 KEG)
    0--000--000--0
    BALLAST PT BIG EYE IPA 1/6 BBL
    $73.00
    KEG 660.1oz (1/6 KEG)
    0--000--000--0
    BALLAST PT BIG EYE IPA 4/6/12 CAN
    $33.00
    12oz CAN(6)
    6--72438--00052--7
    There are many more, but this is enough to show you. I need to remove all spaces from the first and third columns, and I need to remove all of the "--" from the fourth. Where do I put in the SUBSTITUTE function, and what are source-string, existing-string, new-string, and occurrence?
    Thank You for your help.
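    For reference, a minimal illustration of SUBSTITUTE (the column letters are assumptions about your layout): if the package text is in column C, =SUBSTITUTE(C2," ","") turns ".5 lt PET (6)" into ".5ltPET(6)"; here C2 is the source-string, " " is the existing-string, "" is the new-string, and leaving occurrence empty replaces every match. Likewise, with the codes in column D, =SUBSTITUTE(D2,"--","") turns 0--000--000--0 into 00000000. Put the formula in a helper column next to the data and fill it down.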

  • Exp/Imp alternatives for large amounts of data (30GB)

    Hi,
    I've come into a new role where various test databases are to be 'refreshed' each night with cleansed copies of production data. They have been using the Imp/Exp utilities with 10g R2. The export process is OK, but what's killing us is the time it takes to transfer, unzip, and import 32GB .dmp files. I'm looking for suggestions on what we can do to reduce these times. Currently the import takes 4 to 5 hours.
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities. Are 'Transportable Tablespaces' the next logical solution? I've been reading up on them and could start prototyping/testing the process next week. What else is in Oracle's toolbox I should be considering?
    Thanks
    brian

    Hi,
    I haven't used datapump, but I've heard it doesn't offer much benefit when it comes to saving time over the old imp/exp utilities
    Datapump will be faster for a couple of reasons. It uses direct path to unload the data. DataPump also supports parallel processes, so while one process is exporting metadata, the other processes can be exporting the data. In 11, you can also compress the dumpfiles as you are exporting. (Both data and metadata compression are available in 11; I think metadata compression is available in 10.2.) This will remove your zip step.
    As far as transportable tablespace, yes, this is an option. There are some requirements, but if it works for you, all you will be exporting will be the metadata and no data. The data is copied from the source to the target by way of datafiles. One of the biggest requirements is that the tablespaces need to be read only while the export job is running. This is true for both exp/imp and expdp/impdp.
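    For what it's worth, a rough illustration of the Data Pump syntax (the directory object, schema name and degree of parallelism are assumptions, not taken from the thread): expdp system DIRECTORY=dp_dir SCHEMAS=app_owner DUMPFILE=app_%U.dmp LOGFILE=app_exp.log PARALLEL=4 COMPRESSION=ALL on the source, then impdp with the same DUMPFILE pattern and PARALLEL=4 on the test side. COMPRESSION=ALL is the 11g feature mentioned above (it requires the Advanced Compression option) and is what lets you drop the separate zip/unzip step.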

  • HT1926 itunes cannot locate original file for large amount of music in library

    When I open itunes it cannot find some of the tunes I have uploaded onto the library from CD's etc.

    This happens if the file is no longer where iTunes expects to find it. Possible causes are that you or some third-party tool has moved, renamed or deleted the file, or that the drive it lives on has had a change of drive letter. It is also possible that iTunes has changed from expecting the files to be in the pre-iTunes 9 layout to the post-iTunes 9 layout, or vice versa, and so is looking in slightly the wrong place.
    Select a track with an exclamation mark, use Ctrl-I to get info, then cancel when asked to try to locate the track. Look on the Summary tab for the location where iTunes thinks the file should be. Now take a look around your hard drive(s). Hopefully you can locate the track in question. If a section of your library has simply been moved, or a drive letter has changed, it should be possible to reverse the actions.
    Alternatively, as long as you can find a location holding the missing files, you should be able to use my FindTracks script to reconnect them to iTunes.
    tt2

  • Sort algorithm for LARGE amount of data?

    hi,
    I need a sorting scheme for the following situation:
    I have a data file where entries are in chunks of variable length. The size of each chunk is defined in the first 5 bytes as a string, so the length can be from 00001-99999, though it is usually around 1000-3000 bytes long. In reality it is never over 10000 bytes, but it is possible for it to be.
    Anyways, I need to sort these files according to the data found at certain displacements in these chunks. I will be sorting anywhere from 200,000 to 100,000,000 at a time. Time is an issue certainly, but if it takes a week to finish that is fine, I just need it to work.
    So, my problem is that none of the typical sorts will work for me (bubble, heap) as far as I can tell, because in those sorts I need to have the data loaded into memory, and this much data will overload the system. I have used, in the past, a C method that feeds these chunks to the sort function a few at a time, then makes files. Hence, not all chunks need to be loaded at once. Does anyone know of any solution to this problem? Any sort algorithms or sort classes that can handle this much data? Thanks!

    Ever tried the radix sort? It's got linear complexity.
    You can still work a chunk at a time, and simply separate the data into several different "buckets", each one identified by, oh, say, the unicode number for the first character in the chunk.
    You now have several smaller lists to sort, and when you're done, NO MERGING IS NECESSARY. Simply append the lists, because the main sets of lists are already sifted into different "buckets".
    Kinda like this:
    Create 256 files and store each record in the file that corresponds to the ASCII value of its first character. Then create 256 files for each of the original 256 files, and distribute the records among them by their second character.
    etc, etc, etc.
    This is very memory intensive for storage, but in terms of run-time complexity it is linear: you will make a fixed number of passes through the list of data. And, as you go along, the lists get shorter and shorter. So while it appears that you are making 256 ^ (max length of data) passes, you're really only making (max length of data) passes, with some additional overhead of creating the extra files.
    For that much data, I would definitely recommend a linear algorithm. Any other sorts would be extremely slow.
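    In case a concrete starting point helps, here is a rough Java sketch of the first bucketing pass described above (the file names, the key displacement, and the assumption that the 5-byte length counts only the bytes that follow it are illustrative, not from the thread):
    import java.io.*;

    // First pass of an external bucket/radix sort: stream the variable-length chunks and
    // append each one to a bucket file chosen by the first byte of its sort key.
    public class BucketPass {
        public static void main(String[] args) throws IOException {
            int keyOffset = 20;   // assumed displacement of the sort key within a chunk
            DataOutputStream[] buckets = new DataOutputStream[256];
            try (DataInputStream in = new DataInputStream(
                    new BufferedInputStream(new FileInputStream("data.bin")))) {
                byte[] lenField = new byte[5];
                while (true) {
                    try {
                        in.readFully(lenField);               // 5-byte ASCII length prefix, e.g. "01234"
                    } catch (EOFException endOfInput) {
                        break;
                    }
                    int len = Integer.parseInt(new String(lenField, "US-ASCII").trim());
                    byte[] chunk = new byte[len];             // assumed: length counts the bytes after the prefix
                    in.readFully(chunk);
                    int b = chunk[keyOffset] & 0xFF;          // bucket by the first byte of the key
                    if (buckets[b] == null) {
                        buckets[b] = new DataOutputStream(new BufferedOutputStream(
                                new FileOutputStream("bucket_" + b + ".bin")));
                    }
                    buckets[b].write(lenField);
                    buckets[b].write(chunk);
                }
            }
            for (DataOutputStream out : buckets) {
                if (out != null) out.close();
            }
        }
    }
    Each bucket file is then small enough to sort in memory (or to be split again on the next key byte), and appending the sorted buckets in order gives the final order with no merge step, exactly as described.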

  • Using Siebel-OPA connector BO mapping for large amount of data

    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks

    nilskil wrote:
    Hi,
    We plan to use the BO mapping approach to get multiple values from OPA to Siebel, which we plan to store as multiple records in Siebel.
    1. Is it advisable to do so using BO mapping?
    2. Would IO mapping be a better approach, considering the size of data involved?
    Thanks
    For passing lots of data between OPA and Siebel I would definitely recommend using an IO mapping. You will find it faster, and the return IO XML will also be easier to deal with.
    Cheers
    Frank
