Large amounts of video data 2TB+

Over the next several months the University of Michigan will be videotaping elementary school classes. We hope to tape at least 40 classrooms, with two cameras in each classroom recording for about 90 minutes each. I'm estimating I will have 2-3 TB of video files that I need to safely store and edit. I'm considering buying external 1 TB FireWire hard drives for this, but in the past using multiple external HDDs was not the best solution: when I have more than one or two FireWire devices hooked up, I start losing connections.
Can someone help me locate information on best practices for storing and editing large amounts of video files with PPro?

I'd recommend looking at solutions offered by one of these companies. If you contact them and describe what you are trying to accomplish, they can offer suggestions...
http://www.caldigit.com/
http://www.dulcesystems.com/
http://www.sonnettech.com/product/fusiond800raid.html

Similar Messages

  • How to send large amount of XML data in one CLOB variable

    Hi,
    I am sending a large amount of XML data to a TCP/IP port in one CLOB variable.
    My requirement is to send the whole data in one go, in one CLOB variable,
    but that CLOB variable is not sufficient to hold all the data.
    Please suggest a solution.
    Thanks in advance

    Hi, here is my code:
    CREATE OR REPLACE PACKAGE BODY APPS.XXMB_WIP_PROD_TAG_DOOR_PKG
    AS
    PROCEDURE xxmb_get_xml_data_1270 (
    -- errbuf OUT VARCHAR2,
    -- retcode OUT NUMBER,
    p_org IN VARCHAR2,
    p_limit_to_global IN VARCHAR2,
    p_label IN VARCHAR2,
    p_printer IN VARCHAR2,
    p_quantity IN VARCHAR2,
    p_print_method IN VARCHAR2,
    p_enable_release IN VARCHAR2,
    p_enable_serial_no IN VARCHAR2,
    p_release IN VARCHAR2,
    p_rep_group IN VARCHAR2,
    p_cart_type IN VARCHAR2,
    p_cart_no_from IN VARCHAR2,
    p_cart_no_to IN VARCHAR2,
    p_serial_no IN VARCHAR2
    )
    AS
    CURSOR c_xml_data_door (
    p_org IN VARCHAR2,
    p_label IN VARCHAR2,
    p_printer IN VARCHAR2,
    p_quantity IN VARCHAR2,
    p_print_method IN VARCHAR2,
    p_rep_group IN VARCHAR2,
    p_release IN VARCHAR2,
    p_cart_type IN VARCHAR2,
    p_cart_no_from IN VARCHAR2,
    p_cart_no_to IN VARCHAR2,
    p_serial_no IN VARCHAR2
    )
    IS
    SELECT xxasa.item_id AS item_id, xcs.serial_number AS serial_number, xxcpf.cart_type, xcs.destination_cart_num cart, xcs.destination_slot_num slot
    -- ... (FROM clause and joins truncated in the original post)
    CURSOR c_product_detail (
    l_product IN NUMBER,
    l_serial_num IN VARCHAR2,
    p_limit_to_global IN VARCHAR2
    )
    IS
    SELECT xcra_specie.reference_id AS reference_id,
    xcra_ege.attribute_value AS ege, xcs.item_id AS item_id,
    -- ... (rest of the select list, FROM clause and joins truncated in the original post)
    AND msib.inventory_item_id = l_product
    and xcs.organization_id = nvl(p_org, xcs.organization_id)
    AND xcs.serial_number = NVL (l_serial_num, xcs.serial_number);
    /*-------------------------------------------------------+
    | Cursor to fetch the data for special Message Label |
    +-------------------------------------------------------*/
    CURSOR c_count (p_item_id IN NUMBER)
    IS
    SELECT xcrav.attribute_value, xcs.serial_number, xcs.cabinet_number
    FROM xxmb_czmfg_ref_attributes xcrav,
    cz_config_attributes cca,
    -- ... (remaining FROM list and join conditions truncated in the original post)
    AND msib.organization_id = xcs.organization_id
    AND msib.inventory_item_id = xcs.item_id;
    /*--------------------------+
    | Common variables |
    +--------------------------*/
    v_limit_to_global VARCHAR2 (100);
    l_label_count NUMBER := 1;
    total_rec NUMBER;
    l_rewrite VARCHAR2 (1) := 'N';
    l_file_count NUMBER := 1;
    l_separate_line VARCHAR2 (10);
    -- The following variables are referenced below, but their declarations were
    -- missing from the post; representative types are assumed here.
    l_output_dir VARCHAR2 (512);
    l_output_file_prefix VARCHAR2 (100);
    l_output_file_name VARCHAR2 (512);
    l_file_end VARCHAR2 (20);
    l_dir_seperator VARCHAR2 (1);
    l_request_id NUMBER;
    v_label VARCHAR2 (100);
    v_printer VARCHAR2 (100);
    v_quantity VARCHAR2 (100);
    l_xml_content CLOB;
    l_job_status VARCHAR2 (100);
    l_printer_status VARCHAR2 (100);
    l_status_type VARCHAR2 (100);
    l_return_status VARCHAR2 (100);
    l_return_msg VARCHAR2 (4000);
    BEGIN
    fnd_profile.get ('WMS_LABEL_OUTPUT_DIRECTORY', l_output_dir);
    fnd_profile.get ('WMS_LABEL_FILE_PREFIX', l_output_file_prefix);
    l_request_id := apps.fnd_global.conc_request_id;
    l_output_file_name :=
    l_output_file_prefix || l_request_id || l_file_end;
    l_dir_seperator := '/';
    IF (INSTR (l_output_dir, l_dir_seperator) = 0)
    THEN
    l_dir_seperator := '\';
    END IF;
    v_label := p_label;
    v_printer := p_printer;
    v_quantity := p_quantity;
    V_LIMIT_TO_GLOBAL := P_LIMIT_TO_GLOBAL;
    L_XML_CONTENT := '<?xml version="1.0" encoding="UTF-8" ?>';
    L_XML_CONTENT := L_XML_CONTENT || '<!DOCTYPE labels SYSTEM "label.dtd">';
    L_XML_CONTENT := L_XML_CONTENT || '<labels>';
    FOR r_xml_data_door IN c_xml_data_door (p_org,
    p_label,
    p_printer,
    p_quantity,
    p_print_method,
    p_rep_group,
    p_release,
    p_cart_type,
    p_cart_no_from,
    p_cart_no_to,
    p_serial_no
    )
    LOOP
    -- dbms_output.put_line ( 1 );
    FOR r_product_detail IN
    c_product_detail (r_xml_data_door.item_id,
    r_xml_data_door.serial_number,
    v_limit_to_global
    )
    LOOP
    -- dbms_output.put_line ( 2 );
    -- l_xml_content := '<?xml version="1.0" encoding="UTF-8" ?>';
    -- l_xml_content := l_xml_content || '<!DOCTYPE labels SYSTEM "label.dtd">';
    -- l_xml_content := l_xml_content || '<labels>';
    fnd_file.put_line (fnd_file.LOG, 'label cnt: ' || l_label_count);
    dbms_output.put_line (l_label_count);
    L_XML_CONTENT := L_XML_CONTENT
    || '<label _FORMAT="lib://FRD/' || v_label || '"'
    || ' _PRINTERNAME="' || v_printer || '"'
    || ' _QUANTITY="' || v_quantity || '"'
    || '>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Color">'
    || R_PRODUCT_DETAIL.COLOR
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT ||'<variable name= "Model">'
    || R_PRODUCT_DETAIL.model
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Build_Date">'
    || R_PRODUCT_DETAIL.BUILD_DATE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Assy_Cart">'
    || R_PRODUCT_DETAIL.ASSY_CART
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Assy_Slot">'
    || R_PRODUCT_DETAIL.ASSY_SLOT
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Assy_Line">'
    || R_PRODUCT_DETAIL.ASSY_LINE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Finish_Cart">'
    || R_PRODUCT_DETAIL.FINISH_CART
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Finish_Slot">'
    || R_PRODUCT_DETAIL.FINISH_SLOT
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Serial_Number">'
    || R_PRODUCT_DETAIL.SERIAL_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Serial_Number_Barcode">'
    || R_PRODUCT_DETAIL.SERIAL_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Specie">'
    || R_PRODUCT_DETAIL.SPECIE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT ||'<variable name= "Truck_Group">'
    || R_PRODUCT_DETAIL.TRUCK_GROUP
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Label_Sequence_No">'
    || L_LABEL_COUNT
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT ||'<variable name= "WIP_Cart">'
    || R_PRODUCT_DETAIL.WIP_CART
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "WIP_Slot">'
    || R_PRODUCT_DETAIL.WIP_SLOT
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Cabinet_Sequence_No">'
    || R_PRODUCT_DETAIL.CAB_SEQ_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "RAW_PART_NO">'
    || R_PRODUCT_DETAIL.RAW_PART_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "JC">'
    || R_PRODUCT_DETAIL.JC
    || '</variable>' ;
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "QC">'
    || R_PRODUCT_DETAIL.QC
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Thickness">'
    || R_PRODUCT_DETAIL.THICKNESS
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Width">'
    || R_PRODUCT_DETAIL.width
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Length">'
    || R_PRODUCT_DETAIL.length
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Overlay">'
    || R_PRODUCT_DETAIL.OVERLAY
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Options">'
    || R_PRODUCT_DETAIL.OPTIONS
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Stop">'
    || R_PRODUCT_DETAIL.stop
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Profile_No">'
    || R_PRODUCT_DETAIL.PROFILE_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Door_Style">'
    || R_PRODUCT_DETAIL.DOOR_STYLE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Glaze">'
    || R_PRODUCT_DETAIL.GLAZE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Shape">'
    || R_PRODUCT_DETAIL.SHAPE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Glass">'
    || R_PRODUCT_DETAIL.GLASS
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Hinge_Side">'
    || R_PRODUCT_DETAIL.HINGE_SIDE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Hinge_Type">'
    || R_PRODUCT_DETAIL.HINGE_TYPE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "EGE">'
    || R_PRODUCT_DETAIL.EGE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Door_Style_Code">'
    || R_PRODUCT_DETAIL.DOOR_STYLE_CODE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Finish_Technique">'
    || R_PRODUCT_DETAIL.FINISH_TECHNIQUE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Hinge_Location">'
    || R_PRODUCT_DETAIL.HINGE_LOCATION
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Construction_Type">'
    || R_PRODUCT_DETAIL.CONSTRUCTION_TYPE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Panel_Type">'
    || R_PRODUCT_DETAIL.PANEL_TYPE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Panel_Profile_No">'
    || R_PRODUCT_DETAIL.PANEL_PROFILE_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Rail_Profile_No">'
    || R_PRODUCT_DETAIL.RAIL_PROFILE_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Rail_1_Length">'
    || R_PRODUCT_DETAIL.RAIL_1_LENGTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Stile_Profile_No">'
    || R_PRODUCT_DETAIL.STILE_PROFILE_NO
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Rail_2_Length">'
    || R_PRODUCT_DETAIL.RAIL_2_LENGTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Stile_1_Length">'
    || R_PRODUCT_DETAIL.STILE_1_LENGTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Stile_2_Length">'
    || R_PRODUCT_DETAIL.STILE_2_LENGTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Panel_1_Width">'
    || R_PRODUCT_DETAIL.PANEL_1_WIDTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Panel_1_Length">'
    || R_PRODUCT_DETAIL.PANEL_1_LENGTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Panel_2_Width">'
    || R_PRODUCT_DETAIL.PANEL_2_WIDTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Panel_2_Length">'
    || R_PRODUCT_DETAIL.PANEL_2_LENGTH
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT ||'</label>';
    /*-----------------------------------------+
    | Handling XML data for special message |
    +-----------------------------------------*/
    FOR rec_count IN c_count (r_product_detail.item_id)
    LOOP
    L_XML_CONTENT := L_XML_CONTENT
    || '<label _FORMAT="lib://FRD/SpecMessage_Door.btw"'
    || ' _PRINTERNAME="' || v_printer || '"'
    || ' _QUANTITY="' || v_quantity || '"'
    || '>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Serial_Number">'
    || REC_COUNT.SERIAL_NUMBER
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Special_Message">'
    || REC_COUNT.ATTRIBUTE_VALUE
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT || '<variable name= "Cabinet_Sequence_No">'
    || REC_COUNT.CABINET_NUMBER
    || '</variable>';
    L_XML_CONTENT := L_XML_CONTENT ||'</label>';
    EXIT WHEN c_count%NOTFOUND; -- redundant inside a cursor FOR loop, which exits on its own
    END LOOP;
    -- L_XML_CONTENT := L_XML_CONTENT || '</labels>';
    fnd_file.put_line (fnd_file.LOG, l_xml_content);
    dbms_output.put_line ( l_xml_content );
    L_LABEL_COUNT := L_LABEL_COUNT + 1;
    -- apps.inv_print_request.sync_print_tcpip (l_xml_content,
    -- l_job_status,
    -- l_printer_status,
    -- l_status_type,
    -- l_return_status,
    -- l_return_msg
    -- );
    END LOOP;
    END LOOP;
    l_xml_content := l_xml_content || '</labels>';
    fnd_file.put_line (fnd_file.LOG, l_xml_content);
    apps.inv_print_request.sync_print_tcpip (l_xml_content,
    l_job_status,
    l_printer_status,
    l_status_type,
    l_return_status,
    l_return_msg
    );
    L_XML_CONTENT := NULL;
    /*--------------------------------------------------------------------------------------+
    | APPS.INV_PRINT_REQUEST.SYNC_PRINT_TCPIP will send the XML data to TCP/IP Port |
    +--------------------------------------------------------------------------------------*/
    fnd_file.put_line (fnd_file.LOG,
    'Printer Status:' || ' ' || l_printer_status);
    fnd_file.put_line (fnd_file.LOG,
    'Return Status:' || ' ' || l_return_status);
    fnd_file.put_line (fnd_file.LOG,
    'Return Message:' || ' ' || L_RETURN_MSG);
    COMMIT;
    EXCEPTION
    WHEN OTHERS
    THEN
    fnd_file.put_line
    (fnd_file.LOG,
    'Unexpected error in the xxmb_get_xml_data_1270 procedure, error is : '
    || SQLERRM
    || ', '
    || SQLCODE
    );
    END xxmb_get_xml_data_1270;
    END xxmb_wip_prod_tag_door_pkg;
    /
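    The usual way around a fixed-size buffer when pushing XML to a TCP/IP port is to stream the CLOB in chunks rather than sending it in a single call. A minimal sketch, assuming UTL_TCP is available (the host, port, and chunk size are illustrative, not from this thread):
    DECLARE
        l_conn   UTL_TCP.connection;
        l_buf    VARCHAR2 (32767);
        l_amount INTEGER;
        l_offset INTEGER := 1;
        l_sent   PLS_INTEGER;
        l_clob   CLOB := '<labels>...</labels>'; -- the assembled XML document
    BEGIN
        l_conn := UTL_TCP.open_connection (remote_host => 'printhost', remote_port => 9100);
        LOOP
            l_amount := 8192; -- reset each pass; DBMS_LOB.READ updates this IN OUT parameter
            BEGIN
                DBMS_LOB.read (l_clob, l_amount, l_offset, l_buf);
            EXCEPTION
                WHEN NO_DATA_FOUND THEN EXIT; -- offset is past the end of the CLOB
            END;
            l_sent   := UTL_TCP.write_text (l_conn, l_buf, l_amount);
            l_offset := l_offset + l_amount;
        END LOOP;
        UTL_TCP.close_connection (l_conn);
    END;
    /
    With this pattern no single VARCHAR2 buffer ever needs to hold the whole document.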

  • Large Amount of text data in a Field

    I have a VB front-end application and now need to store what could be a very large amount of text data in one field (i.e., more than a VARCHAR field can hold). What data type could I use for the field, and what is the capacity of this field?
    Thanks

    Hi,
    BFILE is a data type in Oracle that stores the location and name of a file kept outside the database. To store large amounts of text data, you can use this type: save the text into a '.dat', '.txt', or '.rtf' file, then save the file name and its location in the Oracle database. I believe one can store up to 4 GB of data using this type. I have never used this data type myself, but have read about it in the documentation. Hope it works.
    All the best.
    Kiranmayee
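    For illustration, here is a minimal sketch of the approach Kiranmayee describes (the directory object, path, and file name are hypothetical):
    -- A directory object tells Oracle where the external files live
    CREATE OR REPLACE DIRECTORY doc_dir AS '/data/documents';
    CREATE TABLE documents (
        doc_id   NUMBER PRIMARY KEY,
        doc_text BFILE   -- stores only a locator, not the text itself
    );
    -- BFILENAME builds the locator from the directory object and file name
    INSERT INTO documents (doc_id, doc_text)
    VALUES (1, BFILENAME ('DOC_DIR', 'notes.txt'));
    Note that a CLOB column would instead keep the text inside the database; either way the capacity is far beyond what a VARCHAR field can hold.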

  • Extremely Slow USB 3.0 Speeds When Transferring Large Amounts of Video

    Hi there,
    I am transferring large amounts of footage (250 GB-1.75 TB chunks) from 5x 5400 rpm 2 TB drives to 5x 7200 rpm 2 TB drives simultaneously (via 6x USB 3.0 connections and 4x SATA III, with copy/paste in Explorer), and the transfer speeds are incredibly slow. Initially the speeds show up as quite fast (45-150 MB/s+), but then they slow down to around 3 MB/s.
    The drives have not been manually defragmented but the vast majority of the files on each are R3D video files.
    I am wondering if the amount of drives/data being used/sent is what is causing such slow speeds or if there might be another culprit? I would be incredibly appreciative to learn of any solutions to increase speed significantly. Many thanks...
    Specs:
    OS: Windows 7 Professional
    Processor: i7 4790k
    RAM: 32GB
    GPU: Nvidia 970 GTX

    If the USB ports are all on the same controller, they share its resources, so the transfer rate with 6 ports would be at most 1/6th of the transfer rate with 1 USB port, even disregarding overhead. A USB 3.0 controller tops out around 5 Gbit/s (roughly 450 MB/s usable), so six simultaneous transfers leave at most about 75 MB/s each before any overhead. Add that overhead to the equation and the transfer rate goes down even further. Now take into account the fact that you are copying from slow 5400 RPM disks that effectively max out at around 80 MB/s with these chunks and high latency, add the OS overhead, and these transfer rates do not surprise me.

  • I have a large amount of "Other" data on my iPhone 5 after the last 2 updates. Why and how do I get rid of it?

    After I sync my phone, it has a large amount of data under the category "Other" - about 11 GB. This is an issue. Does anyone know why, and how I can delete it? I think it's from iMessage, email, etc., but I have deleted lots of it with no result.

    You might look at PhoneClean
    <http://www.imobie.com/phoneclean/download.htm>

  • SharePoint Library for Large Amounts of Engineering Data

    We are currently using traditional project directory folders for large projects with sometimes tens of thousands of documents.
    We are planning on migrating the data to SharePoint, and the path forward is unclear.
    Initially it was recommended to use a library, not numerous folders, to contain the data so that searching of the data is improved.
    That sounded great. The 1st project used to pilot this for other projects is divided into 20 different modification packages.
    A library category was created for MODS with selectable options of the 20 mod package names and "No Defined" (default value).
    Some data items are shared between more than one MOD, so this category can have more than one assignment.
    When we looked at the directory structure in place, we found no consistency in folder names and no consistency in directory structure.
    Many folders have 5 or 6 (or more) levels of subdirectories.
    Ideally we want no more than 4 or 5 categories of metadata to define all data.
    Mapping from chaos into a comparatively small number of categories is daunting.
    When searching this forum I find that libraries should be limited to 2,000 items.
    There are tens of thousands of items in our pilot project.
    Surely someone somewhere has encountered this organizational problem.
    I could use some advice from someone who has been there before.

    John,
    The limit of 2,000 is not a hard limit; the actual number of items you can store in a list is 30,000,000. However, more items will have an impact on rendering performance and on locks on the SQL table.
    Also, the limit you mentioned (2,000) is the list view threshold limit, which is actually 5,000.
    One important aspect: boundaries are hard limits, which you cannot exceed, while supported limits are based on tests and can be exceeded, but doing so may cause issues.
    That being said, I would suggest you check out this link on
    SharePoint Server 2010 capacity management: Software boundaries and limits
    http://technet.microsoft.com/en-us/library/cc262787(v=office.14).aspx
    and explore other ways of optimizing your list
    here are some references that would help you to optimize -
    http://office.microsoft.com/en-us/sharepoint-foundation-help/manage-lists-and-libraries-with-many-items-HA010377496.aspx
    http://technet.microsoft.com/en-us/library/cc262813(v=office.14).aspx
    http://office.microsoft.com/en-us/sharepoint-server-help/sharepoint-lists-v-techniques-for-managing-large-lists-RZ101874361.aspx
    Hope this helps!
    Ram - SharePoint Architect
    Blog - http://www.SharePointDeveloper.in
    Please vote or mark your question answered, if my reply helps you

  • Managing larger amounts of localized data

    Hello!
    (This is a bit long and maybe fuzzy, it's 1:30AM here)
    I have this idea about making a webapp with (among other things) three drop down lists where people can select;
    * country
    * state
    * city
    That is quite easy to solve.
    After someone for example selects "Canada" as country I have a JavaScript to pull some xml/json from my web-app with all the states of Canada.
    Same thing happens when they select a state, I get a list with cities within the state.
    However, here comes the tricky part:
    I want to localize the countries/states/cities in English, French, and Spanish (and possibly more later, e.g. Portuguese).
    Suddenly I have a load of new data to manage, and I have no clue how to structure this in a good way. :(
    First I thought I'd put everything in arrays in a servlet, but that results in huge classes, and not all the data needs to be in memory all the time.
    Second I thought I'd put the data in a database and structure it in 3 tables (countries, states, cities), but then I got stuck on how to make a good table structure that is manageable with TopLink/EclipseLink.
    Example:
    CREATE TABLE countries (
        countryid CHAR( 2 ) NOT NULL, -- 'us', 'mx', 'ca'
        locale CHAR( 2 ) NOT NULL, -- 'en', 'fr', 'es'
        countryname VARCHAR( 64 ) NOT NULL,
        CONSTRAINT countries_id_pk PRIMARY KEY ( countryid, locale )
    );
    CREATE TABLE states (
        stateid INTEGER NOT NULL, -- same for same state in different locales
        countryid CHAR( 2 ) NOT NULL, -- 'us', 'mx', 'ca'
        locale CHAR( 2 ) NOT NULL, -- 'en', 'fr', 'es'
        state VARCHAR( 64 ) NOT NULL,
        CONSTRAINT states_id_pk PRIMARY KEY ( stateid, countryid, locale )
    );
    ... and so on..
    The above example is a pain to deal with due to the JPA idea of using embedded classes for composite primary keys. It becomes a lot of juggling with objects there :(
    I also need a method to reliably generate sequence numbers for 'stateid' which are the same across languages, and I haven't found a method in JPA which allows me to generate a sequence number when I want to. :(
    Or is there a better method to organize the data?

    apalsson wrote:
    jschell wrote: What is huge?
    Putting all countries, their states and the states' cities as constants in a class file.
    It's going to be a very large class which also contains data that is not necessary all the time.
    I doubt that is going to happen. For starters, it is unlikely that your market supports that, and even less likely that your application does.
    Not to mention that maintaining that for the entire world might just possibly be a full-time job.
    And it still probably doesn't take that much space. After all, 1 MB can hold 13,000+ 80-character values.
    That is a bit problematic.
    There are two sides: user data and storage.
    For displaying the names you need something to pull the data from. If you want to localize, then you will need a localization value.
    I am using two methods to select the locale: the first is user preferences, and the second is the language configured in the user's browser.
    To select a country I use the ISO standard name ("mx", "ca" or "us") and then, depending on what locale the user has selected, it can be "Mexico" (for the "en" locale), "México" (for the "es" locale) or "Mexico" (for the "fr" locale).
    Might want to be careful with that. If I am in Arizona and running a shop with employees whose principal language is Mexican Spanish, I don't want to have to specify that my country is Mexico, because of course it isn't.
    {quote:title=jschell wrote:}
    There are business drivers, though. Have you considered what happens when there is a name change? The names do change. If you print a report for a year ago, should it display the new name or the old name? A new name means that the report doesn't match what you printed a year ago. An old name might really annoy someone. It might even be illegal.
    {quote}
    True. That is why it is easier to have the data in a database instead of hardcoded in a class.
    The fact that it is in a database doesn't change what I said.
    My problem is how to organize it, though. It's giving me a headache, but now I realize that my question should really be asked in a database forum and maybe not a design patterns forum. :)
    It is a data problem, not a database problem. Your data model drives the persistence model.
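    As an aside, one common way to keep the locale out of each entity's primary key is to split the entity from its translations. A sketch along the lines of the tables in the question (names are illustrative, not from the thread):
    CREATE TABLE countries (
        countryid CHAR( 2 ) NOT NULL, -- 'us', 'mx', 'ca'
        CONSTRAINT countries_pk PRIMARY KEY ( countryid )
    );
    CREATE TABLE country_names (
        countryid   CHAR( 2 )     NOT NULL REFERENCES countries ( countryid ),
        locale      CHAR( 2 )     NOT NULL, -- 'en', 'fr', 'es'
        countryname VARCHAR( 64 ) NOT NULL,
        CONSTRAINT country_names_pk PRIMARY KEY ( countryid, locale )
    );
    Only the translation table then needs a composite key, so the JPA entity for a country or state stays simple, and a 'stateid' can be generated once per state rather than once per language.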

  • Horizontal scaling, with large amounts of binary data

    A question about horizontal scaling; I tried asking late last night, but no one was active. Basically, I have an app that needs to scale (by adding new machines all talking to a database in the backend). This is all fine, but I have some binary file storage requirements for the app (files over 80 MB in size). This introduces a concurrency issue, as I can't store this binary data on any of the individual servers (because then it would be on one, and not all, leaving the app in an inconsistent state). So where do I store the data to enforce a consistent state? Have the individual apps FTP the file to a central location?
    I am trying to avoid storing binary data in the database. Does anyone have any suggestions on how to address this problem?

    I understand why you are trying to avoid storing binary data in a database but if you need to ensure that this data cannot be modified without the appropriate restrictions then using a database might make sense. You could even have a separate database just for the binary data because you will need to ensure you get the block sizing correct. Also, some databases might be better than others in this case. For example Oracle is likely to be significantly better than MySQL.
    If you do want to use files then you need to put the file in a central location and enforce locking the file to prevent concurrent modification. You can probably tie into a protocol that automatically handles this for you.
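    If the database route is taken, a minimal sketch of a central store with the row-level locking mentioned above (table and column names are illustrative):
    CREATE TABLE file_store (
        file_id   NUMBER           PRIMARY KEY,
        file_name VARCHAR2( 255 )  NOT NULL,
        content   BLOB             NOT NULL
    );
    -- A node locks the row before rewriting the payload, so concurrent
    -- writers on other machines block instead of clobbering each other.
    SELECT content FROM file_store WHERE file_id = 42 FOR UPDATE;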

  • After syncing, iTunes shows a large amount of "other" data

    Does anyone know what this means and how to fix it? I should have 3 GB of apps but I only have 0.05...

    I have the same issue - after the last sync my "Other" was blown up to 3 GB. I have already tried several syncs, also de-activating apps/videos etc., but nothing works. Any other idea than restoring?

  • Capturing Large Amounts of Video

    I work for a web series where we record 4 hours of video a day, edit many clips out of the 4 hour show, and post these various short clips on YouTube.
    They have always been on PC, and I am starting to convert them to Macs.
    They bought a Mac Pro and an Elgato Video Capture device. The Elgato is nice because it allows me to capture the show while working on other projects and editing in Final Cut Pro. The problem is, the Elgato saves the files as MPEG-4, which makes editing them in FCP absolutely unbearable, as it has to render every 5 minutes.
    What do you guys recommend I do?
    I need to be able to capture 4 hours each day while editing other projects at the same time in FCP. And the video files I capture need to be editable files that won't need to be rendered after every little change.
    Someone recommended the:
    AJA Kona 3
    http://www.bhphotovideo.com/c/product/417388-REG/AJAKONA_3_Kona_3_12_10_Bit_HDSD.html
    and I would love this but it is out of their price range. Any other, more affordable options?
    Thank you very much for your time.
    Andrew

    Is there any way I could still edit on FCP while capturing video with those cards?
    No...because FCP is in use capturing the footage. No NLE allows for that.
    Would it be capturing via Log and Capture in FCP, or a third party software?
    FCP.
    Could I capture it via iMovie while editing video in FCP?
    iMovie doesn't recognize these cards...and it captures in completely different ways than FCP, so no.
    Shane

  • Query about clustering unrelated large amounts of data together vs. keeping it separate.

    I would like to ask the talented enthusiasts who frequent the developer network to tell me if I have understood how LabVIEW deals with clusters. A generic description of a situation involving clusters, and what I believe LabVIEW does, is given below. An example of this type of situation, generating the Fibonacci sequence, is attached to illustrate what I am saying.
    A description of the general situation:
    A cluster containing several different variables (mostly unrelated) has one or two of these variables unbundled for immediate use, and then the modified values are bundled back into the cluster for later use.
    What I think LabVIEW does:
    As the original cluster is going into the unbundle (to get the original variable values) and the bundle (to update the stored variable values), a duplicate of the entire cluster is made before picking out the individual values chosen to be unbundled. This means that if the cluster also contains a large amount of unrelated data, then processor time is wasted duplicating this data.
    If, on the other hand, this large amount of data is kept separate, then this would not happen and no processor time is wasted.
    In the attached file the good method does have the array (the large amount of unrelated data) within the cluster and does not use the array in more than one place, so it is not duplicated. If tunnels were used instead, I believe at least one duplicate is made.
    Am I correct in thinking that this is the behaviour LabVIEW uses with clusters? (I expected LabVIEW to duplicate only the variable values chosen in the unbundle code object. As this choice is fixed at compile time, it would seem to me that the compiler should be able to recognise that the other cluster variables are never used.)
    Is there a way of keeping the efficiency of using many separate variables (potentially ~50) whilst keeping the ease of using a single cluster variable over using separate variables?
    The attachment:
    A VI that generates the Fibonacci sequence (the I32 used wraps at around the 44th value, so values at that point and later are wrong) is attached. The calculation is iterative, using a for loop. Two variables are needed to perform the iteration, and these are stored in a cluster (and passed from iteration to iteration within the cluster). To provide the large amount of unrelated data, a large array of reasonably sized strings is provided.
    The bad way is to have the array stored within the cluster (causing massive overhead). The good way is to have the array separate from the other pieces of data, even if it passes through the for loop (no massive overhead).
    Try replacing the array shift registers with tunnels in the good case and see if you can repeat my observation that using tunnels causes overhead in comparison to shift registers whenever there is no other reason to duplicate the array.
    I am running LabVIEW 7 on Windows 2000 with sufficient memory so that the page file is not used in this example.
    Thank you all very much for your time and for sharing your LabVIEW experience,
    Richard Dwan
    Attachments:
    Fibonacci_test.vi ‏71 KB

    > That is an interesting observation you have made, and it seems to me to be
    > quite inexplicable. The trick is interesting but not practical for me
    > to use in developing a large piece of software. Thanks for your input
    > - I think I'll be contacting technical support for an explanation
    > along with some other anomalies involving large arrays that I have
    > spotted.
    >
    The deal here is that the bundle and unbundle nodes must be very careful
    when they are swapping elements around. This used to make copies in the
    normal cases, but that has been improved. The reason that the sequence
    affects it is that it affects the algorithm so that it orders the
    element movement so that the algorithm succeeds in avoiding a copy.
    Another, more obvious way is to use a regular bundle and unbundle, not
    the named variety. These tend to have an easier time in the algorithm also.
    Technically, I'd report the diagram to tech support to see if the named
    bundle/unbundle case can be handled as well. In the meantime, you can
    leave the data unbundled, as in the faster version.
    Greg McKaskle

  • Need to update a table that contains large volume of xml data

    Hi,
    I want to update a table that contains a large amount of XML data.
    When I execute the query it shows an error:
    XML parsing failed. But the data in the XML is well formed; I don't know why this is happening.
    Please help me with this.
    Thanks,
    Fahad

    Below is my code; please take a look.
    create or replace
    PROCEDURE SPFETCHRETRIEVEDATA (
        p_txteordernum IN trnorderitem.TXTEORDERNUM%TYPE,
        p_intversionnum IN trnorderitem.INTVERSIONNUM%TYPE ,
        p_interrorcode OUT NUMBER)
        AS
        ------variable declaration---
        v_xmlorderitem XMLTYPE;
        v_trnsiebelmodification XMLTYPE;
        diff XMLTYPE;
            BEGIN
                BEGIN
                select xmlorderitemxml into v_xmlorderitem
                from trnorderitem
                where TXTEORDERNUM= p_txteordernum
                AND INTVERSIONNUM= p_intversionnum;
                  END;
               --insert into tempxml values ('xmlorderitem',v_xmlorderitem);commit;
                BEGIN
                SELECT TrnSiebelModificationXML into v_trnsiebelmodification
                from trnsiebelmodification
                where TXTEORDERNUM= p_txteordernum
                AND INTVERSIONNUM= p_intversionnum
                AND TXTSIEBELFIELDNAME='Asset XML';
              --  insert into tempxml values ('trnsiebelmodification',v_trnsiebelmodification);commit;
    --            EXCEPTION
    --            WHEN TOO_MANY_ROWS THEN
    --            dbms_output.put_line('Statement return multiple rows');
                 END;
    --------comparing differences between xml data and storing into a variable -----------
               BEGIN
               select xmldiff(v_xmlorderitem, v_trnsiebelmodification)
               into   diff
               from   dual;
               --insert into tempxml values ('diffxml',diff);commit;
               if diff IS NOT NULL THEN
               UPDATE trnsiebelmodification
                SET TXTACTIONTYPE='Update2'
                WHERE TXTEORDERNUM= p_txteordernum
                AND INTVERSIONNUM= p_intversionnum
                 AND TXTSIEBELFIELDNAME='Asset XML';
                ELSE
                UPDATE trnsiebelmodification
                SET TXTACTIONTYPE='No Change2'
                WHERE TXTEORDERNUM= p_txteordernum
                AND INTVERSIONNUM= p_intversionnum
                 AND TXTSIEBELFIELDNAME='Asset XML';
                END IF;
                END;
        END SPFETCHRETRIEVEDATA;

  • Is there any way to connect a Time Capsule to a MacBook Pro directly via USB? I have a large amount of data that I want to back up and it is taking a very long time (35 GB is taking 3 hrs; I have 2 TB of files in total).

    I have a large amount of data that I want to back up, and it is taking a very long time (35 GB is taking 3 hrs; I have 2 TB of files in total). Could it perhaps be done via USB? I want to use the Time Capsule as backup for an archive which is currently stored on a 2 TB WESC HD.

    No, you cannot back up via a direct USB connection.
    But gigabit ethernet is much faster anyway. Are you connected directly by ethernet?
    Is the drive you are backing up from plugged into the TC? That will slow it down something chronic. Plug that drive in by its fastest connection method (WESC, sorry, I have no idea what that is): if ethernet, use that; otherwise USB direct to the computer. Always think about which way the files come and go. Since you are copying from the computer, everything has to go through it, and it makes things slower if the files go over the same cable in both directions, if you catch the drift.

  • Freeze when writing large amount of data to iPod through USB

    I used to take backups of my PowerBook to my 60 GB iPod video. Backups are taken with tar in Terminal, directly to the mounted iPod volume.
    Now, every time I try to write a big amount of data to the iPod (from a MacBook Pro), the whole system freezes (the mouse cursor moves, but nothing else can be done). When the USB cable is pulled out, the system recovers and acts as it should. This problem happens every time a large amount of data is written to the iPod.
    The same iPod works perfectly (when backing up) on the PowerBook, and small amounts of data can easily be written to it (on the MacBook Pro) without problems.
    Does anyone else have the same problem? Any ideas why this is and how to resolve the issue?
    MacBook Pro, 2.0 GHz, 100 GB 7200 RPM, 1 GB RAM, Mac OS X (10.4.5), iPod Video 60 GB connected through USB

    Ex-PC user... never had a problem.
    Got a MacBook Pro last week... having the same issues, and this is now with an exchanged machine!
    I've read elsewhere that it's something to do with the USB timing out, and that if you get a new USB port and attach the iPod through it (one that's powered separately), it should work. Kind of a bummer, but those folks who tried it say it works.
    Me, I can upload to the iPod piecemeal, manually... but even then, it sometimes freezes.
    The good news is that once the iPod is loaded, the problem shouldn't happen. It's the large amounts of data.
    Apple should DEFINITELY fix this, though. Unbelievable.
    MacBook Pro 2.0, Mac OS X (10.4.6)

  • Creation of data packages due to large amount of datasets leads to problems

    Hi Experts,
    We have built our own generic extractor.
    When data packages are created (due to the large number of datasets), different problems occur.
    For example:
    Datasets are doubled and appear twice, one time in package one and a second time in package two. Since those datasets are not identical, information is lost while uploading them to an ODS or Cube.
    What can I do? SAP will not help, since this is a generic DataSource.
    Any suggestions?
    BR,
    Thorsten

    Hi All,
    Thanks a million for your help.
    My conclusions from your answers are the following:
    a) Since the ODS is Standard, within the transformation no datasets are deleted, but aggregated.
    b) Uploading a huge number of datasets is possible in two ways:
       b1) with selection criteria in the InfoPackage, and several uploads
       b2) without selection criteria in the InfoPackage, and therefore an automatic split of the datasets into data packages
    c) Both ways should have the same result within the ODS.
    OK. Thanks for that.
    So far I have only checked the data within the PSA. In the PSA the number of datasets is not equal for variants b1 and b2.
    I guess this is normal technical behaviour of BI.
    I am fine as long as the results in the ODS are the same for b1 and b2.
    Have a nice day.
    BR,
    Thorsten
