Handling big amounts of data

Hello,
I have to analyse a large amount of data (more than 20 MB) that I read from a logfile. I can't split the data into smaller parts because some of my analysis methods need all of it (regression, ...).
Are there any tricks in LabVIEW (5.0.1) for handling big amounts of data?
hans

You might be able to do what you want. If the analysis really needs all of the data at once, the whole process may take a couple of minutes, but that in itself is no problem. What you lose is any feedback on how the processing is going (it can look as if the VI is hanging), so using the lower-level "Read File" functions rather than "Read From Spreadsheet File" is better: it lets you build progress monitoring, for example a graph indicator, into the analysis VI you want to build.
Thanks in advance,
Tom
hans wrote:
> Hello,
>
> I have to analyse a great amount of data (more than 20MByte), that I
> read from a logfile. I can't split these data into smaller parts
> because some of my analysis-methods need all data (regression,....).
>
> Are there any tricks for LabVIEW (5.0.1), how to handle big amounts of
> data?
>
> hans

Similar Messages

  • How can I return a big amount of data from a stored procedure?

    How can I return a big amount of data from a stored procedure in an efficient way?
    For example, not by using a cursor to go through all the rows and then assigning the values to variables.
    thanks in advance!

    Let's see if I'm able to explain myself...
    I have a SQL query that returns about 190,000 rows (in the best of the options).
    I have something like this in my stored procedure:
    FOR REC IN trn_adj_no
    LOOP
    REC_PIPE.INSTANCE_ID:=REC.INSTANCE_ID;
    REC_PIPE.PROCESS_ID:=REC.PROCESS_ID;
    REC_PIPE.ENTITY_TYPE:='TRANSACTION';
    REC_PIPE.CRES_FILENAME:=REC.CRES_FILENAME;
    REC_PIPE.CHUNK_REVISION_SK:=P_CHUNK_REVISION_SK;
    REC_PIPE.CHUNK_ID:=trim(p_chunk_id);
    REC_PIPE.FCL_SK:=REC.FCL_SK;
    REC_PIPE.FCL_TRADE_SK:=REC.FCL_TRADE_SK;
    REC_PIPE.FCL_VALUATION_SK:=REC.FCL_VALUATION_SK;
    REC_PIPE.FCL_BUSINESS_GROUP:=REC.FCL_BUSINESS_GROUP;
    REC_PIPE.ABS_TRN_CD:=REC.ABS_TRN_CD;
    REC_PIPE.ACC_INTEREST_IMP_LOAN_CCY_IFRS:=REC.ACC_INTEREST_IMP_LOAN_CCY_IFRS;
    REC_PIPE.ACC_INTEREST_IMP_LOAN_IFRS:=REC.ACC_INTEREST_IMP_LOAN_IFRS;
    REC_PIPE.ACCRUED_INT_AMT:=REC.ACCRUED_INT_AMT;
    REC_PIPE.ACCRUED_INT_CCY_CD:=REC.ACCRUED_INT_CCY_CD;
    REC_PIPE.AMORT_ID:=REC.AMORT_ID;
    REC_PIPE.AMORT_IND:=REC.AMORT_IND;
    REC_PIPE.ATI:=REC.ATI;
    REC_PIPE.ATI_2:=REC.ATI_2;
    REC_PIPE.ATI_2_AMT:=REC.ATI_2_AMT;
    REC_PIPE.ATI_2_CCY_CD:=REC.ATI_2_CCY_CD;
    REC_PIPE.ATI_AMT:=REC.ATI_AMT;
    REC_PIPE.ATI_CCY_CD:=REC.ATI_CCY_CD;
    REC_PIPE.AVG_MTH_DUE_AMT:=REC.AVG_MTH_DUE_AMT;
    REC_PIPE.AVG_MTH_DUE_CCY_CD:=REC.AVG_MTH_DUE_CCY_CD;
    REC_PIPE.AVG_YTD_DUE_AMT:=REC.AVG_YTD_DUE_AMT;
    REC_PIPE.AVG_YTD_DUE_CCY_CD:=REC.AVG_YTD_DUE_CCY_CD;
    REC_PIPE.BAL_SHEET_EXP_AMT:=REC.BAL_SHEET_EXP_AMT;
    REC_PIPE.BAL_SHEET_EXP_AMT_IFRS:=REC.BAL_SHEET_EXP_AMT_IFRS;
    REC_PIPE.BAL_SHEET_EXP_CCY_CD:=REC.BAL_SHEET_EXP_CCY_CD;
    REC_PIPE.BAL_SHEET_EXP_CCY_CD_IFRS:=REC.BAL_SHEET_EXP_CCY_CD_IFRS;
    REC_PIPE.BAL_SHEET_EXP_EUR_AMT:=REC.BAL_SHEET_EXP_EUR_AMT;
    REC_PIPE.BAL_SHEET_EXP_EUR_AMT_IFRS:=REC.BAL_SHEET_EXP_EUR_AMT_IFRS;
    REC_PIPE.BEN_CCDB_ID:=REC.BEN_CCDB_ID;
    REC_PIPE.BEN_CD:=REC.BEN_CD;
    REC_PIPE.BEN_SRC_CD:=REC.BEN_SRC_CD;
    REC_PIPE.BOOK_CD:=REC.BOOK_CD;
    REC_PIPE.BOOK_TYPE_CD:=REC.BOOK_TYPE_CD;
    REC_PIPE.BOOK_VAL_AMT:=REC.BOOK_VAL_AMT;
    REC_PIPE.BOOK_VAL_AMT_IFRS:=REC.BOOK_VAL_AMT_IFRS;
    REC_PIPE.BOOK_VAL_CCY_CD:=REC.BOOK_VAL_CCY_CD;
    REC_PIPE.BOOK_VAL_CCY_CD_IFRS:=REC.BOOK_VAL_CCY_CD_IFRS;
    REC_PIPE.BOOK_VAL_US_GAAP_AMT:=REC.BOOK_VAL_US_GAAP_AMT;
    REC_PIPE.BRANCH_ID:=REC.BRANCH_ID;
    REC_PIPE.BRANCH_LOC_CD:=REC.BRANCH_LOC_CD;
    REC_PIPE.BS_COMPEN_CD:=REC.BS_COMPEN_CD;
    REC_PIPE.BUS_AREA_UBR_ID:=REC.BUS_AREA_UBR_ID;
    REC_PIPE.CA_CD:=REC.CA_CD;
    REC_PIPE.CAD_RISK_WEIGHT_PCT:=REC.CAD_RISK_WEIGHT_PCT;
    REC_PIPE.CASH_SETTLE_IND:=REC.CASH_SETTLE_IND;
    REC_PIPE.CLS_FLG:=REC.CLS_FLG;
    REC_PIPE.COB_DT:=REC.COB_DT;
    REC_PIPE.COL_AGMENT_SRC_CD:=REC.COL_AGMENT_SRC_CD;
    REC_PIPE.CONSOLIDATION_PCT:=REC.CONSOLIDATION_PCT;
    REC_PIPE.CONTRACT_AMT:=REC.CONTRACT_AMT;
    REC_PIPE.CONTRACT_COUNT:=REC.CONTRACT_COUNT;
    REC_PIPE.COST_UNSEC_WORKOUT_AMT:=REC.COST_UNSEC_WORKOUT_AMT;
    REC_PIPE.CPTY_CCDB_ID:=REC.CPTY_CCDB_ID;
    REC_PIPE.CPTY_CD:=REC.CPTY_CD;
    REC_PIPE.CPTY_LONG_NAME:=REC.CPTY_LONG_NAME;
    REC_PIPE.CPTY_SRC_PROXY_UBR_ID:=REC.CPTY_SRC_PROXY_UBR_ID;
    REC_PIPE.CPTY_TYPE_CD:=REC.CPTY_TYPE_CD;
    REC_PIPE.CPTY_UBR_ID:=REC.CPTY_UBR_ID;
    REC_PIPE.CREDIT_LINE_NET_IND:=REC.CREDIT_LINE_NET_IND;
    REC_PIPE.CRES_SYS_ID:=REC.CRES_SYS_ID;
    REC_PIPE.CTRY_RISK_PROVISION_BAL_AMT:=REC.CTRY_RISK_PROVISION_BAL_AMT;
    REC_PIPE.CUR_NOTIONAL_AMT:=REC.CUR_NOTIONAL_AMT;
    REC_PIPE.CUR_NOTIONAL_CCY_CD:=REC.CUR_NOTIONAL_CCY_CD;
    REC_PIPE.CUR_NOTIONAL_CCY_CD_IFRS:=REC.CUR_NOTIONAL_CCY_CD_IFRS;
    REC_PIPE.CUR_NOTIONAL_AMT_IFRS:=REC.CUR_NOTIONAL_AMT_IFRS;
    REC_PIPE.DB_ENTITY_TRANSFER_ASSETS_CD:=REC.DB_ENTITY_TRANSFER_ASSETS_CD;
    REC_PIPE.DEAL_TYPE_CD:=REC.DEAL_TYPE_CD;
    REC_PIPE.DECOMPOSITION_IDENTIFIER:=REC.DECOMPOSITION_IDENTIFIER;
    REC_PIPE.DEF_LOAN_FEE_IFRS:=REC.DEF_LOAN_FEE_IFRS;
    REC_PIPE.DEF_LOAN_FEE_USGAAP:=REC.DEF_LOAN_FEE_USGAAP;
    REC_PIPE.DELIVERY_DT:=REC.DELIVERY_DT;
    REC_PIPE.DERIV_NOT_DT:=REC.DERIV_NOT_DT;
    REC_PIPE.DERIV_NOT_IND:=REC.DERIV_NOT_IND;
    REC_PIPE.DESK:=REC.DESK;
    REC_PIPE.DIVISION_UBR_ID:=REC.DIVISION_UBR_ID;
    REC_PIPE.ENTITY_CCDB_ID:=REC.ENTITY_CCDB_ID;
    REC_PIPE.FAC_CD:=REC.FAC_CD;
    REC_PIPE.FAC_LIMIT_CD:=REC.FAC_LIMIT_CD;
    REC_PIPE.FAC_LIMIT_SRC_CD:=REC.FAC_LIMIT_SRC_CD;
    REC_PIPE.FAC_SRC_CD:=REC.FAC_SRC_CD;
    REC_PIPE.FAIR_VAL_CD_IFRS:=REC.FAIR_VAL_CD_IFRS;
    REC_PIPE.FED_CLASS_CD:=REC.FED_CLASS_CD;
    REC_PIPE.FIRST_RISK_PROVISION_DT:=REC.FIRST_RISK_PROVISION_DT;
    REC_PIPE.FV_IMP_LOSS_CRED_CCY_IFRS:=REC.FV_IMP_LOSS_CRED_CCY_IFRS;
    REC_PIPE.FV_IMP_LOSS_CRED_IFRS:=REC.FV_IMP_LOSS_CRED_IFRS;
    REC_PIPE.FV_IMP_LOSS_CRED_YTD_CCY_IFRS:=REC.FV_IMP_LOSS_CRED_YTD_CCY_IFRS;
    REC_PIPE.FV_IMP_LOSS_CRED_YTD_IFRS:=REC.FV_IMP_LOSS_CRED_YTD_IFRS;
    REC_PIPE.GEN_RISK_PROVISION_BAL_AMT:=REC.GEN_RISK_PROVISION_BAL_AMT;
    REC_PIPE.GL_PROFIT_CENTRE:=REC.GL_PROFIT_CENTRE;
    REC_PIPE.GLOBAL_GL_CD:=REC.GLOBAL_GL_CD;
    REC_PIPE.IFRS_POSITION:=REC.IFRS_POSITION;
    REC_PIPE.IFRS_POSITION_AGGR:=REC.IFRS_POSITION_AGGR;
    REC_PIPE.IMP_DT:=REC.IMP_DT;
    REC_PIPE.IMP_METHOD_CD:=REC.IMP_METHOD_CD;
    REC_PIPE.INT_COMPEN_CD:=REC.INT_COMPEN_CD;
    REC_PIPE.INTER_IND:=REC.INTER_IND;
    REC_PIPE.INTERCO_IND:=REC.INTERCO_IND;
    REC_PIPE.INTERCO_IND_IFRS:=REC.INTERCO_IND_IFRS;
    REC_PIPE.LOAN_PUR_FAIR_VAL_ADJ_AMT:=REC.LOAN_PUR_FAIR_VAL_ADJ_AMT;
    REC_PIPE.LOCAL_GL_CD:=REC.LOCAL_GL_CD;
    REC_PIPE.LOWEST_LEVEL_UBR_ID:=REC.LOWEST_LEVEL_UBR_ID;
    REC_PIPE.LTD_FEES_TO_PRINCIPAL_AMT:=REC.LTD_FEES_TO_PRINCIPAL_AMT;
    REC_PIPE.MA_CD:=REC.MA_CD;
    REC_PIPE.MA_SPL_NOTE:=REC.MA_SPL_NOTE;
    REC_PIPE.MA_SRC_CD:=REC.MA_SRC_CD;
    REC_PIPE.MA_TYPE_CD:=REC.MA_TYPE_CD;
    REC_PIPE.MAN_LINK_ID:=REC.MAN_LINK_ID;
    REC_PIPE.MASTER_MA_CD:=REC.MASTER_MA_CD;
    REC_PIPE.MATURITY_DT:=REC.MATURITY_DT;
    REC_PIPE.MAX_OUT_AMT:=REC.MAX_OUT_AMT;
    REC_PIPE.MAX_OUT_CCY_CD:=REC.MAX_OUT_CCY_CD;
    REC_PIPE.MTM_AMT:=REC.MTM_AMT;
    REC_PIPE.MTM_AMT_IFRS:=REC.MTM_AMT_IFRS;
    REC_PIPE.MTM_CCY_CD:=REC.MTM_CCY_CD;
    REC_PIPE.MTM_CCY_CD_IFRS:=REC.MTM_CCY_CD_IFRS;
    REC_PIPE.NEW_RISK_PROVISION_AMT:=REC.NEW_RISK_PROVISION_AMT;
    REC_PIPE.NON_PERF_IND:=REC.NON_PERF_IND;
    REC_PIPE.NON_PERF_RCV_INT_AMT:=REC.NON_PERF_RCV_INT_AMT;
    REC_PIPE.OPT_TYPE_CD:=REC.OPT_TYPE_CD;
    REC_PIPE.ORIG_NOTIONAL_AMT:=REC.ORIG_NOTIONAL_AMT;
    REC_PIPE.ORIG_NOTIONAL_CCY_CD:=REC.ORIG_NOTIONAL_CCY_CD;
    REC_PIPE.ORIG_TRN_CD:=REC.ORIG_TRN_CD;
    REC_PIPE.ORIG_TRN_SRC_CD:=REC.ORIG_TRN_SRC_CD;
    REC_PIPE.OTHER_CUR_NOTIONAL_AMT:=REC.OTHER_CUR_NOTIONAL_AMT;
    REC_PIPE.OTHER_CUR_NOTIONAL_CCY_CD:=REC.OTHER_CUR_NOTIONAL_CCY_CD;
    REC_PIPE.OTHER_ORIG_NOTIONAL_AMT:=REC.OTHER_ORIG_NOTIONAL_AMT;
    REC_PIPE.OTHER_ORIG_NOTIONAL_CCY_CD:=REC.OTHER_ORIG_NOTIONAL_CCY_CD;
    REC_PIPE.OVERDUE_AMT:=REC.OVERDUE_AMT;
    REC_PIPE.OVERDUE_CCY_CD:=REC.OVERDUE_CCY_CD;
    REC_PIPE.OVERDUE_DAY_COUNT:=REC.OVERDUE_DAY_COUNT;
    REC_PIPE.PAY_CONTRACT_AMT:=REC.PAY_CONTRACT_AMT;
    REC_PIPE.PAY_CONTRACT_CCY_CD:=REC.PAY_CONTRACT_CCY_CD;
    REC_PIPE.PAY_FWD_AMT:=REC.PAY_FWD_AMT;
    REC_PIPE.PAY_FWD_CCY_CD:=REC.PAY_FWD_CCY_CD;
    REC_PIPE.PAY_INT_INTERVAL:=REC.PAY_INT_INTERVAL;
    REC_PIPE.PAY_INT_RATE:=REC.PAY_INT_RATE;
    REC_PIPE.PAY_INT_RATE_TYPE_CD:=REC.PAY_INT_RATE_TYPE_CD;
    REC_PIPE.PAY_NEXT_FIX_CPN_DT:=REC.PAY_NEXT_FIX_CPN_DT;
    REC_PIPE.PAY_UL_SEC_TYPE_CD:=REC.PAY_UL_SEC_TYPE_CD;
    REC_PIPE.PAY_UL_ISS_CCDB_ID:=REC.PAY_UL_ISS_CCDB_ID;
    REC_PIPE.PAY_UL_MTM_AMT:=REC.PAY_UL_MTM_AMT;
    REC_PIPE.PAY_UL_MTM_CCY_CD:=REC.PAY_UL_MTM_CCY_CD;
    REC_PIPE.PAY_UL_SEC_CCY_CD:=REC.PAY_UL_SEC_CCY_CD;
    REC_PIPE.PAY_UL_SEC_CD:=REC.PAY_UL_SEC_CD;
    REC_PIPE.PCP_TRANSACTION_SK:=REC.PCP_TRANSACTION_SK;
    REC_PIPE.PROD_AREA_UBR_ID:=REC.PROD_AREA_UBR_ID;
    REC_PIPE.QUOTA_PART_AMT:=REC.QUOTA_PART_AMT;
    REC_PIPE.RCV_CONTRACT_AMT:=REC.RCV_CONTRACT_AMT;
    REC_PIPE.RCV_CONTRACT_CCY_CD:=REC.RCV_CONTRACT_CCY_CD;
    REC_PIPE.RCV_FWD_AMT:=REC.RCV_FWD_AMT;
    REC_PIPE.RCV_FWD_CCY_CD:=REC.RCV_FWD_CCY_CD;
    REC_PIPE.RCV_INT_RATE:=REC.RCV_INT_RATE;
    REC_PIPE.RCV_INT_RATE_TYPE_CD:=REC.RCV_INT_RATE_TYPE_CD;
    REC_PIPE.RCV_INT_INTERVAL:=REC.RCV_INT_INTERVAL;
    REC_PIPE.RCV_NEXT_FIX_CPN_DT:=REC.RCV_NEXT_FIX_CPN_DT;
    REC_PIPE.RCV_UL_ISS_CCDB_ID:=REC.RCV_UL_ISS_CCDB_ID;
    REC_PIPE.RCV_UL_MTM_AMT:=REC.RCV_UL_MTM_AMT;
    REC_PIPE.RCV_UL_MTM_CCY_CD:=REC.RCV_UL_MTM_CCY_CD;
    REC_PIPE.RCV_UL_SEC_CCY_CD:=REC.RCV_UL_SEC_CCY_CD;
    REC_PIPE.RCV_UL_SEC_CD:=REC.RCV_UL_SEC_CD;
    REC_PIPE.RCV_UL_SEC_TYPE_CD:=REC.RCV_UL_SEC_TYPE_CD;
    REC_PIPE.REC_SEC_AMT:=REC.REC_SEC_AMT;
    REC_PIPE.REC_SEC_CCY_CD:=REC.REC_SEC_CCY_CD;
    REC_PIPE.REC_UNSEC_AMT:=REC.REC_UNSEC_AMT;
    REC_PIPE.REC_UNSEC_CCY_CD:=REC.REC_UNSEC_CCY_CD;
    REC_PIPE.RECON_CD:=REC.RECON_CD;
    REC_PIPE.RECON_SRC_CD:=REC.RECON_SRC_CD;
    REC_PIPE.RECOV_AMT:=REC.RECOV_AMT;
    REC_PIPE.RECOVERY_FLAG:=REC.RECOVERY_FLAG;
    REC_PIPE.REGULATORY_NET_IND:=REC.REGULATORY_NET_IND;
    REC_PIPE.REL_RISK_PROVISION_AMT:=REC.REL_RISK_PROVISION_AMT;
    REC_PIPE.RESPONSIBLE_BRANCH_CCDB_ID:=REC.RESPONSIBLE_BRANCH_CCDB_ID;
    REC_PIPE.RESPONSIBLE_BRANCH_ID:=REC.RESPONSIBLE_BRANCH_ID;
    REC_PIPE.RESPONSIBLE_BRANCH_LOC_CD:=REC.RESPONSIBLE_BRANCH_LOC_CD;
    REC_PIPE.RESTRUCT_IND:=REC.RESTRUCT_IND;
    REC_PIPE.RISK_END_DT:=REC.RISK_END_DT;
    REC_PIPE.RISK_ENGINES_METHOD_USED:=REC.RISK_ENGINES_METHOD_USED;
    REC_PIPE.RISK_MITIGATING_PCT:=REC.RISK_MITIGATING_PCT;
    REC_PIPE.RISK_PROVISION_BAL_AMT:=REC.RISK_PROVISION_BAL_AMT;
    REC_PIPE.RISK_PROVISION_CCY_CD:=REC.RISK_PROVISION_CCY_CD;
    REC_PIPE.RISK_WRITE_OFF_AMT:=REC.RISK_WRITE_OFF_AMT;
    REC_PIPE.RISK_WRITE_OFF_LIFE_DT_AMT:=REC.RISK_WRITE_OFF_LIFE_DT_AMT;
    REC_PIPE.RT_BUS_AREA_UBR_ID:=REC.RT_BUS_AREA_UBR_ID;
    REC_PIPE.RT_CB_CCDB_ID:=REC.RT_CB_CCDB_ID;
    REC_PIPE.RT_COV_CD:=REC.RT_COV_CD;
    REC_PIPE.RT_COV_SRC_CD:=REC.RT_COV_SRC_CD;
    REC_PIPE.RT_COV_TYPE_CD:=REC.RT_COV_TYPE_CD;
    REC_PIPE.RT_COV_TYPE_CD_IFRS:=REC.RT_COV_TYPE_CD_IFRS;
    REC_PIPE.RT_TYPE_CD:=REC.RT_TYPE_CD;
    REC_PIPE.RT_TYPE_CD_IFRS:=REC.RT_TYPE_CD_IFRS;
    REC_PIPE.SENIORITY:=REC.SENIORITY;
    REC_PIPE.SETTLE_STATUS_ID:=REC.SETTLE_STATUS_ID;
    REC_PIPE.SETTLEMENT_CD:=REC.SETTLEMENT_CD;
    REC_PIPE.SETTLEMENT_DT:=REC.SETTLEMENT_DT;
    REC_PIPE.SOX_REF:=REC.SOX_REF;
    REC_PIPE.SPE_IND:=REC.SPE_IND;
    REC_PIPE.SPL_TREAT_IND_IFRS:=REC.SPL_TREAT_IND_IFRS;
    REC_PIPE.SRC_PROD_TYPE_CD:=REC.SRC_PROD_TYPE_CD;
    REC_PIPE.SRC_PROD_TYPE_CD_IFRS:=REC.SRC_PROD_TYPE_CD_IFRS;
    REC_PIPE.SRC_PROXY_UBR_CD:=REC.SRC_PROXY_UBR_CD;
    REC_PIPE.START_DT:=REC.START_DT;
    REC_PIPE.STRATEGIC_LEND_IND:=REC.STRATEGIC_LEND_IND;
    REC_PIPE.STRIKE_FWD_FUT_PRICE_AMT:=REC.STRIKE_FWD_FUT_PRICE_AMT;
    REC_PIPE.STRIKE_FWD_FUT_PRICE_CCY_CD:=REC.STRIKE_FWD_FUT_PRICE_CCY_CD;
    REC_PIPE.TERMINATION_LOAN_DT:=REC.TERMINATION_LOAN_DT;
    REC_PIPE.TRAD_ASSET_CD_IFRS:=REC.TRAD_ASSET_CD_IFRS;
    REC_PIPE.TRADE_DT:=REC.TRADE_DT;
    REC_PIPE.TRADE_ENTITY_CCDB_ID:=REC.TRADE_ENTITY_CCDB_ID;
    REC_PIPE.TRADE_ENTITY_LOC_CD:=REC.TRADE_ENTITY_LOC_CD;
    REC_PIPE.TRADE_RELATED_IND:=REC.TRADE_RELATED_IND;
    REC_PIPE.TRADE_STATUS:=REC.TRADE_STATUS;
    REC_PIPE.TRADING_ENTITY_LOC_CD:=REC.TRADING_ENTITY_LOC_CD;
    REC_PIPE.TRAN_MATCH_CD:=REC.TRAN_MATCH_CD;
    REC_PIPE.TRN_CD:=REC.TRN_CD;
    REC_PIPE.TRN_LINK_CD:=REC.TRN_LINK_CD;
    REC_PIPE.TRN_SRC_CD:=REC.TRN_SRC_CD;
    REC_PIPE.TRN_TYPE_CD:=REC.TRN_TYPE_CD;
    REC_PIPE.TRN_VERSION:=REC.TRN_VERSION;
    REC_PIPE.UL_DELIVERY_DT:=REC.UL_DELIVERY_DT;
    REC_PIPE.UL_MATURITY_DT:=REC.UL_MATURITY_DT;
    REC_PIPE.UNDLY_ISS_NAME:=REC.UNDLY_ISS_NAME;
    REC_PIPE.UNEARNED_INCOME_CCY_CD:=REC.UNEARNED_INCOME_CCY_CD;
    REC_PIPE.UNEARNED_INCOME_DIS_LOAN_AMT:=REC.UNEARNED_INCOME_DIS_LOAN_AMT;
    REC_PIPE.US_GAAP_POSITION_AGGR:=REC.US_GAAP_POSITION_AGGR;
    REC_PIPE.VALUATION_DT:=REC.VALUATION_DT;
    REC_PIPE.XCHG_CTRY_CD:=REC.XCHG_CTRY_CD;
    REC_PIPE.YEAR01_NOTIONAL_AMT:=REC.YEAR01_NOTIONAL_AMT;
    REC_PIPE.YEAR02_NOTIONAL_AMT:=REC.YEAR02_NOTIONAL_AMT;
    REC_PIPE.YEAR03_NOTIONAL_AMT:=REC.YEAR03_NOTIONAL_AMT;
    REC_PIPE.YEAR04_NOTIONAL_AMT:=REC.YEAR04_NOTIONAL_AMT;
    REC_PIPE.YEAR05_NOTIONAL_AMT:=REC.YEAR05_NOTIONAL_AMT;
    REC_PIPE.YEAR06_NOTIONAL_AMT:=REC.YEAR06_NOTIONAL_AMT;
    REC_PIPE.YEAR07_NOTIONAL_AMT:=REC.YEAR07_NOTIONAL_AMT;
    REC_PIPE.YEAR08_NOTIONAL_AMT:=REC.YEAR08_NOTIONAL_AMT;
    REC_PIPE.YEAR09_NOTIONAL_AMT:=REC.YEAR09_NOTIONAL_AMT;
    REC_PIPE.YEAR10_NOTIONAL_AMT:=REC.YEAR10_NOTIONAL_AMT;
    REC_PIPE.YTD_PL_TRADE_AMT:=REC.YTD_PL_TRADE_AMT;
    REC_PIPE.FCL_VALIDATED_IND:=REC.FCL_VALIDATED_IND;
    REC_PIPE.FCL_EXCLUDED_IND:=REC.FCL_EXCLUDED_IND;
    REC_PIPE.FCL_SOURCE_SYSTEM:=REC.FCL_SOURCE_SYSTEM;
    REC_PIPE.FCL_CREATE_TIMESTAMP:=REC.FCL_CREATE_TIMESTAMP;
    REC_PIPE.FCL_UPDATE_TIMESTAMP:=REC.FCL_UPDATE_TIMESTAMP;
    REC_PIPE.FCL_TRADE_LOAD_DATE:=REC.FCL_TRADE_LOAD_DATE;
    REC_PIPE.FCL_VALUATION_LOAD_DATE:=REC.FCL_VALUATION_LOAD_DATE;
    REC_PIPE.FCL_MATRIX_TRADE_TYPE:=REC.FCL_MATRIX_TRADE_TYPE;
    REC_PIPE.PCP_GCDS_REVISION:=REC.PCP_GCDS_REVISION;
    REC_PIPE.FCL_UTP_BOOK_CD:=REC.FCL_UTP_BOOK_CD;
    REC_PIPE.FCL_MIS_RISK_BOOK_CD:=REC.FCL_MIS_RISK_BOOK_CD;
    REC_PIPE.ALD_STATUS:=REC.ALD_STATUS;
    REC_PIPE.FCAT_OPERATION:=REC.FCAT_OPERATION;
    REC_PIPE.INTERNAL_INTRA_IND_IFRS:=REC.INTERNAL_INTRA_IND_IFRS;
    REC_PIPE.FCL_USAGE_IND:=REC.FCL_USAGE_IND;
    REC_PIPE.QUICK:=REC.QUICK;
    REC_PIPE.SEDOL:=REC.SEDOL;
    REC_PIPE.CUSIP:=REC.CUSIP;
    REC_PIPE.ISIN:=REC.ISIN;
    REC_PIPE.QUANTITY:=REC.QUANTITY;
    REC_PIPE.RIC:=REC.RIC;
    REC_PIPE.DESCRIPTION:=REC.DESCRIPTION;
    REC_PIPE.RESET_DATE:=REC.RESET_DATE;
    REC_PIPE.PRICE:=REC.PRICE;
    REC_PIPE.EXCHANGERATE:=REC.EXCHANGERATE;
    REC_PIPE.PARA_PROD_TYPE_ID:=REC.PARA_PROD_TYPE_ID;
    REC_PIPE.EFF_INT_RATE:=REC.EFF_INT_RATE;
    REC_PIPE.ORIG_FEE_UNREALISED_AMT:=REC.ORIG_FEE_UNREALISED_AMT;
    REC_PIPE.ASSET_TYPE:=REC.ASSET_TYPE;
    REC_PIPE.IMP_IND_IFRS:=REC.IMP_IND_IFRS;
    REC_PIPE.NOT_AMT_REDEMPTION_TO_1YR:=REC.NOT_AMT_REDEMPTION_TO_1YR;
    REC_PIPE.NOT_AMT_REDEMPTION_TO_5YR:=REC.NOT_AMT_REDEMPTION_TO_5YR;
    REC_PIPE.NOT_AMT_REDEMPTION_OVER_5YR:=REC.NOT_AMT_REDEMPTION_OVER_5YR;
    REC_PIPE.CASH_LTD:=REC.CASH_LTD;
    REC_PIPE.CASH_LTD_CCY:=REC.CASH_LTD_CCY;
    REC_PIPE.FCL_FACILITY_SK:=REC.FCL_FACILITY_SK;
    REC_PIPE.DILUTION_RISK_CRITERIA:=REC.DILUTION_RISK_CRITERIA;
    REC_PIPE.FCL_RMS_RUNID:=REC.FCL_RMS_RUNID;
    REC_PIPE.BSTYPE:=REC.BSTYPE;
    REC_PIPE.YTD_ACCR_DIV:=REC.YTD_ACCR_DIV;
    REC_PIPE.YTD_DIV:=REC.YTD_DIV;
    REC_PIPE.ACC_ADJ_YTD:=REC.ACC_ADJ_YTD;
    REC_PIPE.FV_RLZD_PL:=REC.FV_RLZD_PL;
    REC_PIPE.ITD_PL:=REC.ITD_PL;
    REC_PIPE.YTD_RLZD_PL:=REC.YTD_RLZD_PL;
    REC_PIPE.YTD_UNRLZD_PL:=REC.YTD_UNRLZD_PL;
    REC_PIPE.TRN_BAS_LGD:=REC.TRN_BAS_LGD;
    REC_PIPE.TRAD_YTD_RLZD_PL:=REC.TRAD_YTD_RLZD_PL;
    REC_PIPE.TRAD_YTD_UNRLZD_PL:=REC.TRAD_YTD_UNRLZD_PL;
    REC_PIPE.UNADJUSTED_PV:=REC.UNADJUSTED_PV;
    REC_PIPE.UNADJUSTED_REALISED_PL:=REC.UNADJUSTED_REALISED_PL;
    REC_PIPE.UNADJUSTED_UNREALISED_PL:=REC.UNADJUSTED_UNREALISED_PL;
    REC_PIPE.GRC_PROD_TYPE_ID:=REC.GRC_PROD_TYPE_ID;
    REC_PIPE.GRC_PROD_TYPE_ID_IFRS:=REC.GRC_PROD_TYPE_ID_IFRS;
    REC_PIPE.CUR_FEE:=REC.CUR_FEE;
    REC_PIPE.SEC_IND:=REC.SEC_IND;
    REC_PIPE.INVEST_ASSETS_PORT:=REC.INVEST_ASSETS_PORT;
    REC_PIPE.RISK_PROVISION_START_BAL_AMT:=REC.RISK_PROVISION_START_BAL_AMT;
    REC_PIPE.GEN_RISK_PRV_START_BAL_AMT:=REC.GEN_RISK_PRV_START_BAL_AMT;
    REC_PIPE.GEN_RISK_PROVISION_NEW:=REC.GEN_RISK_PROVISION_NEW;
    REC_PIPE.GEN_RISK_PROVISION_REL:=REC.GEN_RISK_PROVISION_REL;
    REC_PIPE.ACQUIRED_IND:=REC.ACQUIRED_IND;
    REC_PIPE.CRED_NET_MNA:=REC.CRED_NET_MNA;
    REC_PIPE.SEC_IFRS_VUE_ADJ:=REC.SEC_IFRS_VUE_ADJ;
    REC_PIPE.YEAR00_NOTIONAL_AMT:=REC.YEAR00_NOTIONAL_AMT;
    REC_PIPE.MAX_OVERDUE_DAYS:=REC.MAX_OVERDUE_DAYS;
    REC_PIPE.MIS_CD:=REC.MIS_CD;
    REC_PIPE.MULTINAME_POOL_SK:=REC.MULTINAME_POOL_SK;
    REC_PIPE.MULTINAME_CHUNK_REVISION_SK:=REC.MULTINAME_CHUNK_REVISION_SK;
    REC_PIPE.MTRX_CALC_CNTRL_EPE:=REC.MTRX_CALC_CNTRL_EPE;
    REC_PIPE.MTRX_CALC_CNTRL_PFE:=REC.MTRX_CALC_CNTRL_PFE;
    REC_PIPE.FCL_TE_VALIDATED_IND:=REC.FCL_TE_VALIDATED_IND;
    REC_PIPE.PCP_TE_DYNAMIC_KEY:=REC.PCP_TE_DYNAMIC_KEY;
    REC_PIPE.AVG_COST:=REC.AVG_COST;
    REC_PIPE.SEC_LONG_NAME:=REC.SEC_LONG_NAME;
    REC_PIPE.ISS_CTRY_CD:=REC.ISS_CTRY_CD;
    REC_PIPE.RISK_REPORTING_UBR:=REC.RISK_REPORTING_UBR;
    REC_PIPE.RISK_COVERING_BRANCH_CCDB:=REC.RISK_COVERING_BRANCH_CCDB;
    REC_PIPE.CLEARING_STATUS:=REC.CLEARING_STATUS;
    REC_PIPE.FCL_CPTY_SK:=REC.FCL_CPTY_SK;
    REC_PIPE.STRGRP:=REC.STRGRP;
    REC_PIPE.OPEN_ENDED_FLAG:=REC.OPEN_ENDED_FLAG;
    REC_PIPE.AVAILABILITY_IND:=REC.AVAILABILITY_IND;
    REC_PIPE.ESCRW_FLG:=REC.ESCRW_FLG;
    REC_PIPE.ESTABLISHED_RELP_FLG:=REC.ESTABLISHED_RELP_FLG;
    REC_PIPE.GOV_GTY_AMT:=REC.GOV_GTY_AMT;
    REC_PIPE.BUBA_CTRY_ID:=REC.BUBA_CTRY_ID;
    REC_PIPE.GOV_GTY_CTRY_CD:=REC.GOV_GTY_CTRY_CD;
    REC_PIPE.INTERNET_DPST_FLG:=REC.INTERNET_DPST_FLG;
    REC_PIPE.LAST_ACTIVITY_DT:=REC.LAST_ACTIVITY_DT;
    REC_PIPE.NOTICE_PERIOD_QTY:=REC.NOTICE_PERIOD_QTY;
    REC_PIPE.OPR_RELP_FLG:=REC.OPR_RELP_FLG;
    REC_PIPE.SIG_WD_PENALTY_FLG:=REC.SIG_WD_PENALTY_FLG;
    REC_PIPE.TRN_ACT_FLG:=REC.TRN_ACT_FLG;
    PIPE ROW(REC_PIPE);
    END LOOP;
    The stored procedure returns the REC_PIPE rows.
    I think this might not be efficient enough...
    What do you think?

  • Best way to store big amount of data

    Hi, I need to store a big amount of data; written to a txt file its size is almost 12 MB, though it depends on the computer it runs on, because what I want to store is all the shared files on a computer.
    Which is the best way to store it? A string array? A text file? A List? I don't need the data after the app closes.
    Thanks

    Well, then which is the best solution? LinkedList or Tree? I only need to store the full path.
    What I didn't say, my fault, is that I need to search for a file name once I have stored them...

    For searching, a LinkedList will be very slow if it's very large. I think the same is true of javax.swing.tree.DefaultTreeModel, which is the JDK's only tree implementation. I don't know what Jakarta-collections has - it's possible they have a tree that offers fast searching. If you want to stick to the standard Java libraries, you'll want a Set for fast searching. TreeSet keeps the entries in sorted order. If you also need to display them as a tree, you can keep them in both a Set and a tree. If you don't have enough memory to do that, then displaying the whole tree isn't going to be useful to the user anyway, so rethink your goal.
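    As a hedged illustration of the TreeSet suggestion above (the paths and the prefix search are invented for the example), storing every shared file's full path in a sorted set gives fast exact lookups, and a range view over the set gives everything under one directory:

    import java.util.SortedSet;
    import java.util.TreeSet;

    public class SharedFileIndex {
        public static void main(String[] args) {
            // Store every shared file's full path; TreeSet keeps them sorted
            // and gives O(log n) lookups even with hundreds of thousands of entries.
            SortedSet<String> paths = new TreeSet<>();
            paths.add("C:/shared/music/song.mp3");
            paths.add("C:/shared/docs/report.txt");
            paths.add("C:/shared/docs/notes.txt");

            // Exact-match search
            System.out.println(paths.contains("C:/shared/docs/report.txt")); // true

            // All paths under one directory: a range view from the prefix up to
            // the highest possible string starting with it.
            String prefix = "C:/shared/docs/";
            SortedSet<String> inDocs = paths.subSet(prefix, prefix + Character.MAX_VALUE);
            System.out.println(inDocs); // both .txt files
        }
    }

    A HashSet would be even faster for pure exact-match lookups, but TreeSet keeps the entries ordered, which also helps if the list ever has to be displayed.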

  • Handling Huge Amount of data in Browser

    I need some information regarding large data handling in a web browser. Browser data is downloaded to the local machine's cache, so when the browser needs data to be downloaded in terms of MBs or GBs, how can we handle that?
    The requirement is as described below.
    A performance monitoring application is collecting performance data of a system every 30 seconds.
    The size of the data collected can be around 10 KB for each interval and it is logged to a database. If this application
    runs for one day, the statistical data size will be around 30 MB (28.8 MB) . If it runs for one week, the data size will be
    210 MB. There is no limitation on the number of days from the software perspective.
    User needs to see this statistical data in the browser. We are not sure if this huge amount of data transfer to the
    browser in one instance is feasible. The user should be able to get the overall picture of the logged data for a
    particular period and if needed, should be able to drill down step by step to lesser ranges.
    For example, if the user queries for data between 10th Nov and 20th Nov, the user expects to get an overall idea of the 11 days of data. Note that it is not possible to show every 30-second sample when showing 11 days of data, so some logic has to be applied to present the 11 days in a reasonably acceptable form. Then the user can select a particular date in the graph, and the data for that day alone should be shown with better granularity than the overall graph.
    Note: The applet may not be a signed applet.

    How do you download gigabytes of data to a browser? The answer is simple. You don't. A data analysis package like the one you describe should run on the server and send the requested summary views to the browser.
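    A small sketch of that server-side approach, assuming the samples live in memory as simple (timestamp, value) pairs; the Sample type and the one-hour bucket are invented for illustration. The server collapses the raw 30-second samples into a few hundred aggregate points before anything is sent to the browser, and a drill-down request simply repeats the same reduction over a narrower date range with a smaller bucket:

    import java.util.ArrayList;
    import java.util.List;

    public class Downsampler {
        /** One raw 30-second performance sample (hypothetical shape). */
        record Sample(long timestampMillis, double value) {}

        /**
         * Collapse raw samples (assumed sorted by timestamp) into fixed-size time
         * buckets, keeping only the average per bucket. 11 days of 30-second
         * samples (~31,680 points) become ~264 points with a one-hour bucket.
         */
        static List<Sample> downsample(List<Sample> raw, long bucketMillis) {
            List<Sample> out = new ArrayList<>();
            int i = 0;
            while (i < raw.size()) {
                long bucketStart = raw.get(i).timestampMillis() / bucketMillis * bucketMillis;
                double sum = 0;
                int n = 0;
                // Consume every sample that falls into the current bucket.
                while (i < raw.size()
                        && raw.get(i).timestampMillis() / bucketMillis * bucketMillis == bucketStart) {
                    sum += raw.get(i).value();
                    n++;
                    i++;
                }
                out.add(new Sample(bucketStart, sum / n));
            }
            return out;
        }
    }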

  • Best way of handling large amounts of data movement

    Hi
    I like to know what is the best way to handle data in the following scenario
    1. We have to create Medical and Rx claims tables for 36 months of data, about 150 million records each, for the first load (months 1, 2, 3, 4, ... 34, 35, 36).
    2. We then have to add the DELTA of month two to the 36-month baseline. But the application requirement is ONLY 36 months, even though the current size is now 37 months.
    3. Similarly, in the 3rd month we will have 38 months, and in the 4th month 39 months.
    4. At the end of the 4th month, how can I delete the first three months of data from the claim files without affecting the performance of what is a 24x7 online system?
    5. Is there a way to create partitions of 3 months each that can be deleted - drop partition number 1? If this is possible, what kind of maintenance activity needs to be done after dropping a partition?
    6. Is there a better way of doing the above scenario? What other options do I have?
    7. My goal is to eliminate the initial months' data from the system, as the requirement is ONLY 36 months of data.
    Thanks in advance for your suggestion
    sekhar

    Hi,
    You should use table partitioning to keep your data in monthly partitions. Search on table partitioning for detailed examples.
    Regards

  • Will BW handle this amount of data?

    Connected to a banking solution, we are debating whether to use SAP BW to store transaction history for all of the customers. The bank needs access to a customer's transaction history quite fast. We are talking about approx. 1 million customers, 6.1 million cases and 80 million accounting movements from the SAP Banking module.
    My question is therefore: is it advisable to invest in a BW solution for this, or is this just too much data for BW to provide a fast response time for customer services queries?
    Thanks and best regards
    Jon Christopher

    I think you need to clarify.  Couple of questions -
    Are you looking to build a customer service application on top of BW where users are frequently going to run queries for individual customers? If these users are front-line users in a branch or call center, you'll need very good performance.
    Do you currently have R3 implemented and BW is already installed or are you looking at buying BW as a standalone data warehouse product?  Use of BW might make sense if you already have it installed, but I don't think I would recommend BW as a standalone datawarehouse product if your data sources are not R3.
    Performance of BW queries is going to depend on the system configuration and the data model you have. So if you are willing to spend on the appropriate hardware, and on performance-tuning consulting if necessary, you can probably achieve the required performance.

  • Logic loses big amount of data

    Has anyone experienced this:
    I write an arrangement in score and all of a sudden... all key changes, double bar lines and tempo changes (global things) just disappear.
    When I check the undo list I don't see any operation I have done that could have caused this.
    The only way of rescuing the situation is to open an earlier backup file and copy those data back from there.
    Even Logic 7 did this now and then.
    This is too frequent to ignore.
    Any clues Luggers?
    Michael

    You're right, this is a particularly nasty bug in Logic 7. It's perturbing to see a report that L8 does this too.
    OK... You say "all of a sudden"...things disappeared. What action on your part directly preceded this? Or were you just looking at the screen and things went bye-bye?
    And by chance are you working on a Logic 7 song that you imported into L8?

  • Socket - problem receiving big amount of data

    I have a client/server system and want to exchange XML data between them over a socket connection. I use a thread to read the received data, which looks like this:
    BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    while (true) {
      while (in.ready())
        System.out.println("received data: " + in.readLine());
    }
    I'm having the problem that the XML message is sometimes received incomplete, i.e. only portions of it are received, or more than one message can be read from the socket's input stream. I've tried to increase the socket's receive buffer size, but this doesn't help.
    I would expect the input stream to signal "ready" only when the complete message has been received, i.e. when the last TCP packet has arrived. Is it possible to implement it that way?

    > The Socket class should be responsible for the TCP packets, e.g. request lost packets, sort the packets into the right order and deliver the complete message to the higher application level.
    The Socket class is not responsible for packets; the operating system's TCP/IP stack is.
    Where you go wrong is assuming that TCP/IP delivers messages. It doesn't. It delivers a byte stream. "Messages" can be delivered in chunks of any size; it's at the mercy of all the routers etc in the network.
    It is the responsibility of the application to chop up the incoming bytes into messages if messages are desired. E.g. send a length first and then the message, or delimit messages somehow (e.g. with a new line character.)
    And don't use in.ready(), it will bite you in the azz one way or another.
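    A minimal sketch of the length-prefix idea (class and method names are mine): the sender writes a 4-byte length followed by the UTF-8 bytes of the XML, and the reader uses readFully, which blocks until the whole message has arrived, so ready() is never needed:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public class MessageFraming {
        /** Write one length-prefixed message to the stream. */
        static void sendMessage(DataOutputStream out, String xml) throws IOException {
            byte[] body = xml.getBytes(StandardCharsets.UTF_8);
            out.writeInt(body.length);   // 4-byte length prefix
            out.write(body);
            out.flush();
        }

        /** Block until exactly one complete message has been read. */
        static String readMessage(DataInputStream in) throws IOException {
            int length = in.readInt();   // read the prefix
            byte[] body = new byte[length];
            in.readFully(body);          // blocks until all bytes have arrived
            return new String(body, StandardCharsets.UTF_8);
        }
    }

    On the receiving side you would wrap socket.getInputStream() in a DataInputStream and call readMessage in a loop; each call returns exactly one complete XML message.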

  • JSP and large amounts of data

    Hello fellow Java fans
    First, let me point out that I'm a big Java and Linux fan, but somehow I ended up working with .NET and Microsoft.
    Right now my software development team is working on a web tool for a very important microchip manufacturer. This tool handles big amounts of data; some of our online reports generate more than 100,000 rows, which need to be displayed in a web client such as Internet Explorer.
    We make use of Infragistics, which is a set of controls for .NET. Infragistics allows me to load data fetched from a database into a control they call UltraWebGrid.
    Our problem comes up when we load large amounts of data into the UltraWebGrid, sometimes 100,000+ rows; during this loading our IIS server's memory gets exhausted, and it can take up to 5 minutes for the server to finish processing and display the 100,000+ row report. We have already proved that the database server (SQL Server) is not the problem; our problem is the IIS web server.
    Our team is now considering migrating this web tool to Java and JSP. Can you all help me with some links, information, or past experiences you have had with loading and displaying large amounts of data like the ones we handle, on JSP? Help will be greatly appreciated.

    Who in the world actually looks at a 100,000 row report?
    Anyway, if I were you and I had to do it because some clueless management person decided it was a good idea... I would write a program that, once a day, week, year or whatever your time period is, produces the report (maybe as a PDF, though you could do it in HTML if you really must have it that way) and serves it as a static file that you link to from your app.
    Then the user just has to wait while it downloads, but the web server or web application server will not be bogged down trying to produce that monstrosity.
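    For what it's worth, here is a rough sketch of that "generate it once, serve it statically" idea; the connection URL, query and output path are placeholders, not anything from the original post. A scheduled job streams the rows straight to a file on disk, so the web tier only ever serves a finished document:

    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class NightlyReportJob {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details, query and output location.
            try (Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT lot_id, yield FROM lot_report");
                 PrintWriter out = new PrintWriter(
                         Files.newBufferedWriter(Path.of("/var/www/static/report.csv")))) {
                out.println("lot_id,yield");
                while (rs.next()) {
                    // Stream row by row; the full report is never held in memory.
                    out.println(rs.getString("lot_id") + "," + rs.getDouble("yield"));
                }
            }
            // Run this from a scheduler (cron, Quartz, ...) once per period;
            // the web app just links to report.csv.
        }
    }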

  • Problem retrieving large amount of data!

    Hi,
    I'm currently working with a database application accessing a very big amount of data. One query can result in 500,000+ hits!
    I would like to present this data in a JTable.
    When the query is executed, I create a two-dimensional array and store the data in it. The problem is that I get OutOfMemoryError when I reach the 150 000th row and add it to the array.
    I've looked into the design pattern "Value List Handler" and it seems it could be of use. But still I need to populate an array with all the data, and then I get the error.
    Is there somehow I could query the database, populate part of the data in a smaller array, use the "Value List Handler" pattern to access small portions of the complete resultset?
    Another problem is that the user wants the ability to sort asc/desc by clicking column headers in the JTable. Then I need access to all the data in that table for it to be sorted correctly. Could I re-query the database with an "ORDER BY <column> ASC" and use a modification of the Value List Handler pattern?
    I'm a bit confused, please help!
    Kind regards, Andreas

    The only chance you have: only select as many rows as you display on the screen. When the user hits "next page", retrieve the next rows.
    You might be able to do this with a scrollable result set, but with that you are left to the mercy of the JDBC driver concerning memory management.
    So you need to think about a solution where you issue a new SELECT narrowing down the result set depending on the first/last row displayed.
    Search this forum for "pagewise navigation"; this question gets asked about 5 times a week (mostly in conjunction with web applications, but the logic behind it should be the same).
    Tailoring your TableModel might be tricky as well.
    Thomas
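    A hedged sketch of that pagewise approach (the table, columns and the LIMIT/OFFSET paging syntax are assumptions; the exact paging SQL varies by database). Each page is a fresh, server-sorted query, so the JTable's model never holds more than one screenful:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.util.ArrayList;
    import java.util.List;

    public class PageLoader {
        private static final int PAGE_SIZE = 100;

        /**
         * Load one page of rows, sorted on the server side. Only PAGE_SIZE rows
         * ever reach the client, so memory use stays flat.
         */
        static List<Object[]> loadPage(Connection con, String orderByColumn,
                                       boolean ascending, int pageIndex) throws Exception {
            // orderByColumn must come from a fixed whitelist, never from user text,
            // because column names cannot be bound as parameters.
            String sql = "SELECT id, name, amount FROM big_table "
                       + "ORDER BY " + orderByColumn + (ascending ? " ASC" : " DESC")
                       + " LIMIT ? OFFSET ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, PAGE_SIZE);
                ps.setInt(2, pageIndex * PAGE_SIZE);
                try (ResultSet rs = ps.executeQuery()) {
                    List<Object[]> rows = new ArrayList<>();
                    while (rs.next()) {
                        rows.add(new Object[] {
                                rs.getInt("id"), rs.getString("name"), rs.getDouble("amount") });
                    }
                    return rows;
                }
            }
        }
    }

    Clicking a column header then just means calling loadPage again with a different orderByColumn and repopulating the TableModel.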

  • How to deal with large amount of data

    Hi folks,
    I have a web page that needs 3 maps to serve. Each map will hold about 150,000 entries, and all of them together will use about 100 MB. For some users with lots of data (e.g. 1,000,000 entries), it may use up to 1 GB of memory. If a few of these high-volume users log in at the same time, it can bring the server down. The data comes from files; I cannot read it on demand because that would be too slow. Loading the data into maps gives me very good performance, but it does not scale. I am thinking of serializing the maps and deserializing them when I need them. Is that my only option?
    Thanks in advance!

    JoachimSauer wrote:
    > I don't buy the "too slow" argument.
    > I'd put the data into a database. They are built to handle that amount of data.
    ++

  • Freeze when writing large amount of data to iPod through USB

    I used to take backups of my PowerBook to my 60G iPod video. Backups are taken with tar in terminal directly to mounted iPod volume.
    Now, every time I try to write a big amount of data to iPod (from MacBook Pro), the whole system freezes (mouse cursor moves, but nothing else can be done). When the USB-cable is pulled off, the system recovers and acts as it should. This problem happens every time a large amount of data is written to iPod.
    The same iPod works perfectly (when backing up) with the PowerBook, and small amounts of data can easily be written to it (from the MacBook Pro) without problems.
    Does anyone else have the same problem? Any ideas why is this and how to resolve the issue?
    MacBook Pro, 2.0Ghz, 100GB 7200RPM, 1GB Ram   Mac OS X (10.4.5)   IPod Video 60G connected through USB

    Ex PC user...never had a problem.
    Got a MacBook Pro last week...having the same issues...and this is now with an exchanged machine!
    I've read elsewhere that it's something to do with the USB timing out. And if you get a new USB port and attach it (and it's powered separately), it should work. Kind of a bummer, but, those folks who tried it say it works.
    Me, I can load the iPod piecemeal, manually... but even then, it sometimes freezes.
    The good news is that once the iPod is loaded, the problem shouldn't happen. It's the large amounts of data.
    Apple should DEFINITELY fix this though. Unbelievable.
    MacBook Pro 2.0   Mac OS X (10.4.6)  

  • Mail for large amount of data

    Hey guys,
    I just switched to Mac (and I love it ;)) and now I am searching for a mail program to host my office POP mail account. I normally get around 5000 messages a month, totalling over 3 GB, mostly pictures.
    Can Mail handle that amount of data, or will it corrupt my inbox?
    I am asking because under Windows, Outlook Express couldn't handle that amount of data, and neither could Thunderbird before the update to 1.5.

    Hi Ernie,
    Thanks for your ideas.
    more below:
    Mick,
    The only place I know for sure that discusses this,
    can be found at:
    http://docs.info.apple.com/article.html?artnum=25812
    Yeah, I read this when I had a couple of In Boxes blow up a year or two ago. Sadly, it wasn't really up to date, and I'd seen no warning beforehand that this could happen. These days I try to keep the In Box as slim and trim as possible. I just wish I knew what the real specs were on how much is too much.
    Other than an individual xxxx.mbox folder, I know of
    no limit that would apply, either to the total size
    of the Mail folder, nor to number of messages.
    What's the limit to an individual xxxx.mbox folder?
    Having said that, I never rely solely upon the Mail
    files to archive important attachments. I also keep
    my Mail folder active on more than one Mac, and thus
    on more than one hard drive. I do not, however, sync
    general Sent messages between the various Macs, on
    the theory that other backup practices protect any
    information that I would ever attach to send. Of
    course some On My Mac mailboxes are archiving both
    received and sent messages for certain subject
    areas.
    Sent messages are important to most of the businesses I support so they all need to be there. Nice thought about archiving attachments. I wonder if it's possible to create a rule or automator flow that would do that? Any attachment over 500k would get archived and then deleted from the message. I wonder if that's possible and simple enough...
    My own Mail folder is in excess of 5 GB, and my
    largest individual xxxx.mbox barely exceeds 1 GB.
    Is each folder created "On My Mac" one mbox?
    Again, she has in excess of 20GB and I'm worried about all that weight in the program. Makes me long for the good old days of Eudora which could handle massive sizes like that.
    thanks, Ernie.
    cheers,
    Mick

  • What is the MAX amount of DATA for JDBC?

    Hello,
    Did anybody have any experience with how JDBC works with large amounts of data, say 1/2 GB or more?
    Any help appreciated.
    Thanks.
    Paul

    Always depends on the driver implementation. Each driver/database combination deals with massive amounts of data in a single transaction differently. Can you be more specific? In general, most drivers of commercial-grade quality and/or shipped with the top 5 RDBMS have pretty solid implementations, limited solely by the network bandwidth available to carry this info. A good developer, however, will not rely on the driver to manage this much info and will develop a scheme to minimize the impact of such large data transfers on an unsuspecting database or client.
    L
    This is what I thought. I just need more information. If, for example, I have a Java process which feeds data from an RDBMS, and the amount of data the process receives is, say, 2 GB, my guess is that this all ends up in a ResultSet object and thus is all held in the VM (memory). So I have to design my process such that it takes only a small sample of data and iterates until I get all the information from my query. E.g. for
    select * from test where data between 1 and 2000000;
    I have to break this down by "DAYS", so my code will be something like
    for (int i = 0; i < 2000000; i++) {
        String select = "select * from test where data = " + i;
        // do something else
    }
    Am I close or far off? Or is there another way - maybe the JDBC driver handles this behind the scenes?
    Well, personally I think you should be structuring your SQL better than that. Making 2,000,000 calls over a network in a for loop is suicidal... and let's not forget how something like that will block your application.
    First you can do something like "select * from test
    where data <= 2000000" in one shot. Now you get the
    whole set and you can scroll through and do your
    operation ( of course this is slow too, but its much
    better than running the query 2000000 times ).
    the other thing you have to consider is : is it
    necessary to really block during an operation like
    this? Can it be done in another thread? a daemon
    process even? Some service which can be polled for
    results when needed?
    This is a tricky example and I'd be interested to know
    more about what you are trying to do.
    When you were talking about large data I was under the
    impression of something like a blob, clob, or any sort
    of single query with 2gb of data in the result.
    Can you break up your query? Doing things in batch in
    an alternative thread can yield to other processes
    perhaps?
    L
    Oh, and also: can the database itself take care of this business? Stored procedures and internal functions are built to handle massive amounts of data with minimal impact. Can you leverage the technology within the database to accomplish your goal? Sometimes a well-written procedure can save you tons of work in your business logic tier.
    The right tools for the right job..
    L
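    To make the "one query, scroll through it" advice concrete, here is a hedged sketch reusing the poster's test table: a modest fetch size asks the driver to stream rows in batches rather than materialising 2 GB in the VM. Whether setFetchSize is actually honoured depends on the driver (some need autocommit off or a special statement type), so treat this as a sketch, not a guarantee:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class BigScan {
        public static void main(String[] args) throws Exception {
            try (Connection con = DriverManager.getConnection("jdbc:yourdb://host/db", "user", "pass");
                 PreparedStatement ps = con.prepareStatement(
                         "select * from test where data between ? and ?")) {
                ps.setFetchSize(1000);   // hint: fetch 1000 rows per round trip
                ps.setInt(1, 1);
                ps.setInt(2, 2000000);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        // Handle one row at a time; never hold the whole result in memory.
                        process(rs.getInt("data"));
                    }
                }
            }
        }

        static void process(int value) {
            // placeholder for the real per-row work
        }
    }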

  • How do you handle and distribute huge amounts of data?

    I've got about 315 channels and will be collecting data at about 1300 Hz. Data collection will last about 3-4 minutes. I'm at a loss on how to distribute the data to researchers for evaluation. These researchers will not have LabVIEW or any other NI software to use. I've tried the TDM stuff and the add-in for Excel 2007, but the files are just too large for Excel to handle in a timely manner. It took just over three hours to open one of my test files. Anybody have any suggestions on how to manage and distribute files this large?

    Frank Rizzo wrote:
    Well I'm sitting in Cleveland Ohio right now so I am going to have to disregard your message.....hahhahaha...
    I'll be curious to see if Jamal Lewis's yardage improves for you guys this year after having some definite dropoffs with us the last couple.
    Though now that he's not playing against you,  your defense's numbers against the run should improve.
    I'd be surprised if you could get that data to the researchers at all using Excel.  It has a limit of 256 columns and 65536 rows.  If you had a column per channel and a row per data point, you'd be talking 315 columns and 312,000 rows for 4 minutes of data.  I guess you could always break it up into several files being sure to leave some rows and columns to give them a chance to do some data calculation.
    Out of curiosity, I created a spreadsheet where every cell was filled with a 1. It took a good 30 seconds to save and was over 100 MB. That was probably about 1/10 of the amount of data you're dealing with. And going back to my earlier calculations, I would guess that as a text file, that much data would need about 10 bytes per value, thus getting you to about 1 GB in text files.
    I would ask them what kind of software they will use to analyze these files and what format they would prefer it in. There are really only two ways to get it to them: either an ASCII text file, which could be very large but would be the most flexible to manipulate, or a binary file, which would be smaller, but there could be a conflict if they don't interpret the file the same way you write it out.
    I haven't used a TDM file add-in for Excel before.  So I don't know how powerful it is, or if there is a lot of overhead involved that would make it take 3 hours.  What if you create multiple TDM files?  Let's say you break it down by bunches of channels and only 20-30 seconds of data at a time.  Something that would keep the size of the data array within the row and column limits of Excel.  (I am guessing about 10 files).  Would each file go into Excel so much faster with that add-in that even if you need to do it 10 times, it would still be far quicker than one large file?  I am wondering if the add-on is spending a lot of time figuring out how to break down the large dataset on its own.
    The only other question.  Have you tried the TDMS files as opposed to the TDM files?  I know they are an upgrade and are supposed to work better with streaming data.  I wonder if they have any improvements for larger datasets.
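    On the ASCII-versus-binary point, here is a hedged sketch of a binary export (the header layout and method names are invented): a tiny self-describing header followed by the samples as 8-byte doubles. At 315 channels x 1300 Hz x 240 s that is roughly 98 million values, about 786 MB as doubles, somewhat smaller than the ~1 GB text estimate above and much faster for the researchers to read programmatically, provided both sides agree on the layout:

    import java.io.BufferedOutputStream;
    import java.io.DataOutputStream;
    import java.io.FileOutputStream;
    import java.io.IOException;

    public class BinaryExport {
        /**
         * Writes a small self-describing header, then the samples channel-major:
         * all samples of channel 0, then channel 1, and so on. The reader must
         * use the same byte order (DataOutputStream is big-endian) and layout.
         */
        static void export(String fileName, double[][] data, double sampleRateHz) throws IOException {
            try (DataOutputStream out = new DataOutputStream(
                    new BufferedOutputStream(new FileOutputStream(fileName)))) {
                out.writeInt(data.length);        // number of channels
                out.writeInt(data[0].length);     // samples per channel
                out.writeDouble(sampleRateHz);    // e.g. 1300.0
                for (double[] channel : data) {
                    for (double sample : channel) {
                        out.writeDouble(sample);
                    }
                }
            }
        }
    }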
