Logic loses large amounts of data

Has anyone experienced this:
I write an arrangement in the score and all of a sudden... all key changes, double bar lines, and tempo changes (global things) just disappear.
When I check the undo list I don't see any operation I have done that could have caused this.
The only way of rescuing the situation is to open an earlier backup file and copy those data back from there.
Even Logic 7 did this now and then.
This is too frequent to ignore.
Any clues Luggers?
Michael

You're right, this is a particularly nasty bug in Logic 7. It's perturbing to see a report that L8 does this too.
OK... You say "all of a sudden"...things disappeared. What action on your part directly preceded this? Or were you just looking at the screen and things went bye-bye?
And by chance are you working on a Logic 7 song that you imported into L8?

Similar Messages

  • How can I return a big amount of data from a stored procedure?

    How can I return a large amount of data from a stored procedure in an efficient way?
    For example, not by using a cursor to go through all the rows and then assigning the values to variables.
    thanks in advance!

    Let's see if I'm able to explain myself...
    I have a SQL query that returns about 190,000 rows (in the best of the options).
    I have something like this in my stored procedure:
    FOR REC IN trn_adj_no
    LOOP
    REC_PIPE.INSTANCE_ID:=REC.INSTANCE_ID;
    REC_PIPE.PROCESS_ID:=REC.PROCESS_ID;
    REC_PIPE.ENTITY_TYPE:='TRANSACTION';
    REC_PIPE.CRES_FILENAME:=REC.CRES_FILENAME;
    REC_PIPE.CHUNK_REVISION_SK:=P_CHUNK_REVISION_SK;
    REC_PIPE.CHUNK_ID:=trim(p_chunk_id);
    REC_PIPE.FCL_SK:=REC.FCL_SK;
    REC_PIPE.FCL_TRADE_SK:=REC.FCL_TRADE_SK;
    REC_PIPE.FCL_VALUATION_SK:=REC.FCL_VALUATION_SK;
    REC_PIPE.FCL_BUSINESS_GROUP:=REC.FCL_BUSINESS_GROUP;
    REC_PIPE.ABS_TRN_CD:=REC.ABS_TRN_CD;
    REC_PIPE.ACC_INTEREST_IMP_LOAN_CCY_IFRS:=REC.ACC_INTEREST_IMP_LOAN_CCY_IFRS;
    REC_PIPE.ACC_INTEREST_IMP_LOAN_IFRS:=REC.ACC_INTEREST_IMP_LOAN_IFRS;
    REC_PIPE.ACCRUED_INT_AMT:=REC.ACCRUED_INT_AMT;
    REC_PIPE.ACCRUED_INT_CCY_CD:=REC.ACCRUED_INT_CCY_CD;
    REC_PIPE.AMORT_ID:=REC.AMORT_ID;
    REC_PIPE.AMORT_IND:=REC.AMORT_IND;
    REC_PIPE.ATI:=REC.ATI;
    REC_PIPE.ATI_2:=REC.ATI_2;
    REC_PIPE.ATI_2_AMT:=REC.ATI_2_AMT;
    REC_PIPE.ATI_2_CCY_CD:=REC.ATI_2_CCY_CD;
    REC_PIPE.ATI_AMT:=REC.ATI_AMT;
    REC_PIPE.ATI_CCY_CD:=REC.ATI_CCY_CD;
    REC_PIPE.AVG_MTH_DUE_AMT:=REC.AVG_MTH_DUE_AMT;
    REC_PIPE.AVG_MTH_DUE_CCY_CD:=REC.AVG_MTH_DUE_CCY_CD;
    REC_PIPE.AVG_YTD_DUE_AMT:=REC.AVG_YTD_DUE_AMT;
    REC_PIPE.AVG_YTD_DUE_CCY_CD:=REC.AVG_YTD_DUE_CCY_CD;
    REC_PIPE.BAL_SHEET_EXP_AMT:=REC.BAL_SHEET_EXP_AMT;
    REC_PIPE.BAL_SHEET_EXP_AMT_IFRS:=REC.BAL_SHEET_EXP_AMT_IFRS;
    REC_PIPE.BAL_SHEET_EXP_CCY_CD:=REC.BAL_SHEET_EXP_CCY_CD;
    REC_PIPE.BAL_SHEET_EXP_CCY_CD_IFRS:=REC.BAL_SHEET_EXP_CCY_CD_IFRS;
    REC_PIPE.BAL_SHEET_EXP_EUR_AMT:=REC.BAL_SHEET_EXP_EUR_AMT;
    REC_PIPE.BAL_SHEET_EXP_EUR_AMT_IFRS:=REC.BAL_SHEET_EXP_EUR_AMT_IFRS;
    REC_PIPE.BEN_CCDB_ID:=REC.BEN_CCDB_ID;
    REC_PIPE.BEN_CD:=REC.BEN_CD;
    REC_PIPE.BEN_SRC_CD:=REC.BEN_SRC_CD;
    REC_PIPE.BOOK_CD:=REC.BOOK_CD;
    REC_PIPE.BOOK_TYPE_CD:=REC.BOOK_TYPE_CD;
    REC_PIPE.BOOK_VAL_AMT:=REC.BOOK_VAL_AMT;
    REC_PIPE.BOOK_VAL_AMT_IFRS:=REC.BOOK_VAL_AMT_IFRS;
    REC_PIPE.BOOK_VAL_CCY_CD:=REC.BOOK_VAL_CCY_CD;
    REC_PIPE.BOOK_VAL_CCY_CD_IFRS:=REC.BOOK_VAL_CCY_CD_IFRS;
    REC_PIPE.BOOK_VAL_US_GAAP_AMT:=REC.BOOK_VAL_US_GAAP_AMT;
    REC_PIPE.BRANCH_ID:=REC.BRANCH_ID;
    REC_PIPE.BRANCH_LOC_CD:=REC.BRANCH_LOC_CD;
    REC_PIPE.BS_COMPEN_CD:=REC.BS_COMPEN_CD;
    REC_PIPE.BUS_AREA_UBR_ID:=REC.BUS_AREA_UBR_ID;
    REC_PIPE.CA_CD:=REC.CA_CD;
    REC_PIPE.CAD_RISK_WEIGHT_PCT:=REC.CAD_RISK_WEIGHT_PCT;
    REC_PIPE.CASH_SETTLE_IND:=REC.CASH_SETTLE_IND;
    REC_PIPE.CLS_FLG:=REC.CLS_FLG;
    REC_PIPE.COB_DT:=REC.COB_DT;
    REC_PIPE.COL_AGMENT_SRC_CD:=REC.COL_AGMENT_SRC_CD;
    REC_PIPE.CONSOLIDATION_PCT:=REC.CONSOLIDATION_PCT;
    REC_PIPE.CONTRACT_AMT:=REC.CONTRACT_AMT;
    REC_PIPE.CONTRACT_COUNT:=REC.CONTRACT_COUNT;
    REC_PIPE.COST_UNSEC_WORKOUT_AMT:=REC.COST_UNSEC_WORKOUT_AMT;
    REC_PIPE.CPTY_CCDB_ID:=REC.CPTY_CCDB_ID;
    REC_PIPE.CPTY_CD:=REC.CPTY_CD;
    REC_PIPE.CPTY_LONG_NAME:=REC.CPTY_LONG_NAME;
    REC_PIPE.CPTY_SRC_PROXY_UBR_ID:=REC.CPTY_SRC_PROXY_UBR_ID;
    REC_PIPE.CPTY_TYPE_CD:=REC.CPTY_TYPE_CD;
    REC_PIPE.CPTY_UBR_ID:=REC.CPTY_UBR_ID;
    REC_PIPE.CREDIT_LINE_NET_IND:=REC.CREDIT_LINE_NET_IND;
    REC_PIPE.CRES_SYS_ID:=REC.CRES_SYS_ID;
    REC_PIPE.CTRY_RISK_PROVISION_BAL_AMT:=REC.CTRY_RISK_PROVISION_BAL_AMT;
    REC_PIPE.CUR_NOTIONAL_AMT:=REC.CUR_NOTIONAL_AMT;
    REC_PIPE.CUR_NOTIONAL_CCY_CD:=REC.CUR_NOTIONAL_CCY_CD;
    REC_PIPE.CUR_NOTIONAL_CCY_CD_IFRS:=REC.CUR_NOTIONAL_CCY_CD_IFRS;
    REC_PIPE.CUR_NOTIONAL_AMT_IFRS:=REC.CUR_NOTIONAL_AMT_IFRS;
    REC_PIPE.DB_ENTITY_TRANSFER_ASSETS_CD:=REC.DB_ENTITY_TRANSFER_ASSETS_CD;
    REC_PIPE.DEAL_TYPE_CD:=REC.DEAL_TYPE_CD;
    REC_PIPE.DECOMPOSITION_IDENTIFIER:=REC.DECOMPOSITION_IDENTIFIER;
    REC_PIPE.DEF_LOAN_FEE_IFRS:=REC.DEF_LOAN_FEE_IFRS;
    REC_PIPE.DEF_LOAN_FEE_USGAAP:=REC.DEF_LOAN_FEE_USGAAP;
    REC_PIPE.DELIVERY_DT:=REC.DELIVERY_DT;
    REC_PIPE.DERIV_NOT_DT:=REC.DERIV_NOT_DT;
    REC_PIPE.DERIV_NOT_IND:=REC.DERIV_NOT_IND;
    REC_PIPE.DESK:=REC.DESK;
    REC_PIPE.DIVISION_UBR_ID:=REC.DIVISION_UBR_ID;
    REC_PIPE.ENTITY_CCDB_ID:=REC.ENTITY_CCDB_ID;
    REC_PIPE.FAC_CD:=REC.FAC_CD;
    REC_PIPE.FAC_LIMIT_CD:=REC.FAC_LIMIT_CD;
    REC_PIPE.FAC_LIMIT_SRC_CD:=REC.FAC_LIMIT_SRC_CD;
    REC_PIPE.FAC_SRC_CD:=REC.FAC_SRC_CD;
    REC_PIPE.FAIR_VAL_CD_IFRS:=REC.FAIR_VAL_CD_IFRS;
    REC_PIPE.FED_CLASS_CD:=REC.FED_CLASS_CD;
    REC_PIPE.FIRST_RISK_PROVISION_DT:=REC.FIRST_RISK_PROVISION_DT;
    REC_PIPE.FV_IMP_LOSS_CRED_CCY_IFRS:=REC.FV_IMP_LOSS_CRED_CCY_IFRS;
    REC_PIPE.FV_IMP_LOSS_CRED_IFRS:=REC.FV_IMP_LOSS_CRED_IFRS;
    REC_PIPE.FV_IMP_LOSS_CRED_YTD_CCY_IFRS:=REC.FV_IMP_LOSS_CRED_YTD_CCY_IFRS;
    REC_PIPE.FV_IMP_LOSS_CRED_YTD_IFRS:=REC.FV_IMP_LOSS_CRED_YTD_IFRS;
    REC_PIPE.GEN_RISK_PROVISION_BAL_AMT:=REC.GEN_RISK_PROVISION_BAL_AMT;
    REC_PIPE.GL_PROFIT_CENTRE:=REC.GL_PROFIT_CENTRE;
    REC_PIPE.GLOBAL_GL_CD:=REC.GLOBAL_GL_CD;
    REC_PIPE.IFRS_POSITION:=REC.IFRS_POSITION;
    REC_PIPE.IFRS_POSITION_AGGR:=REC.IFRS_POSITION_AGGR;
    REC_PIPE.IMP_DT:=REC.IMP_DT;
    REC_PIPE.IMP_METHOD_CD:=REC.IMP_METHOD_CD;
    REC_PIPE.INT_COMPEN_CD:=REC.INT_COMPEN_CD;
    REC_PIPE.INTER_IND:=REC.INTER_IND;
    REC_PIPE.INTERCO_IND:=REC.INTERCO_IND;
    REC_PIPE.INTERCO_IND_IFRS:=REC.INTERCO_IND_IFRS;
    REC_PIPE.LOAN_PUR_FAIR_VAL_ADJ_AMT:=REC.LOAN_PUR_FAIR_VAL_ADJ_AMT;
    REC_PIPE.LOCAL_GL_CD:=REC.LOCAL_GL_CD;
    REC_PIPE.LOWEST_LEVEL_UBR_ID:=REC.LOWEST_LEVEL_UBR_ID;
    REC_PIPE.LTD_FEES_TO_PRINCIPAL_AMT:=REC.LTD_FEES_TO_PRINCIPAL_AMT;
    REC_PIPE.MA_CD:=REC.MA_CD;
    REC_PIPE.MA_SPL_NOTE:=REC.MA_SPL_NOTE;
    REC_PIPE.MA_SRC_CD:=REC.MA_SRC_CD;
    REC_PIPE.MA_TYPE_CD:=REC.MA_TYPE_CD;
    REC_PIPE.MAN_LINK_ID:=REC.MAN_LINK_ID;
    REC_PIPE.MASTER_MA_CD:=REC.MASTER_MA_CD;
    REC_PIPE.MATURITY_DT:=REC.MATURITY_DT;
    REC_PIPE.MAX_OUT_AMT:=REC.MAX_OUT_AMT;
    REC_PIPE.MAX_OUT_CCY_CD:=REC.MAX_OUT_CCY_CD;
    REC_PIPE.MTM_AMT:=REC.MTM_AMT;
    REC_PIPE.MTM_AMT_IFRS:=REC.MTM_AMT_IFRS;
    REC_PIPE.MTM_CCY_CD:=REC.MTM_CCY_CD;
    REC_PIPE.MTM_CCY_CD_IFRS:=REC.MTM_CCY_CD_IFRS;
    REC_PIPE.NEW_RISK_PROVISION_AMT:=REC.NEW_RISK_PROVISION_AMT;
    REC_PIPE.NON_PERF_IND:=REC.NON_PERF_IND;
    REC_PIPE.NON_PERF_RCV_INT_AMT:=REC.NON_PERF_RCV_INT_AMT;
    REC_PIPE.OPT_TYPE_CD:=REC.OPT_TYPE_CD;
    REC_PIPE.ORIG_NOTIONAL_AMT:=REC.ORIG_NOTIONAL_AMT;
    REC_PIPE.ORIG_NOTIONAL_CCY_CD:=REC.ORIG_NOTIONAL_CCY_CD;
    REC_PIPE.ORIG_TRN_CD:=REC.ORIG_TRN_CD;
    REC_PIPE.ORIG_TRN_SRC_CD:=REC.ORIG_TRN_SRC_CD;
    REC_PIPE.OTHER_CUR_NOTIONAL_AMT:=REC.OTHER_CUR_NOTIONAL_AMT;
    REC_PIPE.OTHER_CUR_NOTIONAL_CCY_CD:=REC.OTHER_CUR_NOTIONAL_CCY_CD;
    REC_PIPE.OTHER_ORIG_NOTIONAL_AMT:=REC.OTHER_ORIG_NOTIONAL_AMT;
    REC_PIPE.OTHER_ORIG_NOTIONAL_CCY_CD:=REC.OTHER_ORIG_NOTIONAL_CCY_CD;
    REC_PIPE.OVERDUE_AMT:=REC.OVERDUE_AMT;
    REC_PIPE.OVERDUE_CCY_CD:=REC.OVERDUE_CCY_CD;
    REC_PIPE.OVERDUE_DAY_COUNT:=REC.OVERDUE_DAY_COUNT;
    REC_PIPE.PAY_CONTRACT_AMT:=REC.PAY_CONTRACT_AMT;
    REC_PIPE.PAY_CONTRACT_CCY_CD:=REC.PAY_CONTRACT_CCY_CD;
    REC_PIPE.PAY_FWD_AMT:=REC.PAY_FWD_AMT;
    REC_PIPE.PAY_FWD_CCY_CD:=REC.PAY_FWD_CCY_CD;
    REC_PIPE.PAY_INT_INTERVAL:=REC.PAY_INT_INTERVAL;
    REC_PIPE.PAY_INT_RATE:=REC.PAY_INT_RATE;
    REC_PIPE.PAY_INT_RATE_TYPE_CD:=REC.PAY_INT_RATE_TYPE_CD;
    REC_PIPE.PAY_NEXT_FIX_CPN_DT:=REC.PAY_NEXT_FIX_CPN_DT;
    REC_PIPE.PAY_UL_SEC_TYPE_CD:=REC.PAY_UL_SEC_TYPE_CD;
    REC_PIPE.PAY_UL_ISS_CCDB_ID:=REC.PAY_UL_ISS_CCDB_ID;
    REC_PIPE.PAY_UL_MTM_AMT:=REC.PAY_UL_MTM_AMT;
    REC_PIPE.PAY_UL_MTM_CCY_CD:=REC.PAY_UL_MTM_CCY_CD;
    REC_PIPE.PAY_UL_SEC_CCY_CD:=REC.PAY_UL_SEC_CCY_CD;
    REC_PIPE.PAY_UL_SEC_CD:=REC.PAY_UL_SEC_CD;
    REC_PIPE.PCP_TRANSACTION_SK:=REC.PCP_TRANSACTION_SK;
    REC_PIPE.PROD_AREA_UBR_ID:=REC.PROD_AREA_UBR_ID;
    REC_PIPE.QUOTA_PART_AMT:=REC.QUOTA_PART_AMT;
    REC_PIPE.RCV_CONTRACT_AMT:=REC.RCV_CONTRACT_AMT;
    REC_PIPE.RCV_CONTRACT_CCY_CD:=REC.RCV_CONTRACT_CCY_CD;
    REC_PIPE.RCV_FWD_AMT:=REC.RCV_FWD_AMT;
    REC_PIPE.RCV_FWD_CCY_CD:=REC.RCV_FWD_CCY_CD;
    REC_PIPE.RCV_INT_RATE:=REC.RCV_INT_RATE;
    REC_PIPE.RCV_INT_RATE_TYPE_CD:=REC.RCV_INT_RATE_TYPE_CD;
    REC_PIPE.RCV_INT_INTERVAL:=REC.RCV_INT_INTERVAL;
    REC_PIPE.RCV_NEXT_FIX_CPN_DT:=REC.RCV_NEXT_FIX_CPN_DT;
    REC_PIPE.RCV_UL_ISS_CCDB_ID:=REC.RCV_UL_ISS_CCDB_ID;
    REC_PIPE.RCV_UL_MTM_AMT:=REC.RCV_UL_MTM_AMT;
    REC_PIPE.RCV_UL_MTM_CCY_CD:=REC.RCV_UL_MTM_CCY_CD;
    REC_PIPE.RCV_UL_SEC_CCY_CD:=REC.RCV_UL_SEC_CCY_CD;
    REC_PIPE.RCV_UL_SEC_CD:=REC.RCV_UL_SEC_CD;
    REC_PIPE.RCV_UL_SEC_TYPE_CD:=REC.RCV_UL_SEC_TYPE_CD;
    REC_PIPE.REC_SEC_AMT:=REC.REC_SEC_AMT;
    REC_PIPE.REC_SEC_CCY_CD:=REC.REC_SEC_CCY_CD;
    REC_PIPE.REC_UNSEC_AMT:=REC.REC_UNSEC_AMT;
    REC_PIPE.REC_UNSEC_CCY_CD:=REC.REC_UNSEC_CCY_CD;
    REC_PIPE.RECON_CD:=REC.RECON_CD;
    REC_PIPE.RECON_SRC_CD:=REC.RECON_SRC_CD;
    REC_PIPE.RECOV_AMT:=REC.RECOV_AMT;
    REC_PIPE.RECOVERY_FLAG:=REC.RECOVERY_FLAG;
    REC_PIPE.REGULATORY_NET_IND:=REC.REGULATORY_NET_IND;
    REC_PIPE.REL_RISK_PROVISION_AMT:=REC.REL_RISK_PROVISION_AMT;
    REC_PIPE.RESPONSIBLE_BRANCH_CCDB_ID:=REC.RESPONSIBLE_BRANCH_CCDB_ID;
    REC_PIPE.RESPONSIBLE_BRANCH_ID:=REC.RESPONSIBLE_BRANCH_ID;
    REC_PIPE.RESPONSIBLE_BRANCH_LOC_CD:=REC.RESPONSIBLE_BRANCH_LOC_CD;
    REC_PIPE.RESTRUCT_IND:=REC.RESTRUCT_IND;
    REC_PIPE.RISK_END_DT:=REC.RISK_END_DT;
    REC_PIPE.RISK_ENGINES_METHOD_USED:=REC.RISK_ENGINES_METHOD_USED;
    REC_PIPE.RISK_MITIGATING_PCT:=REC.RISK_MITIGATING_PCT;
    REC_PIPE.RISK_PROVISION_BAL_AMT:=REC.RISK_PROVISION_BAL_AMT;
    REC_PIPE.RISK_PROVISION_CCY_CD:=REC.RISK_PROVISION_CCY_CD;
    REC_PIPE.RISK_WRITE_OFF_AMT:=REC.RISK_WRITE_OFF_AMT;
    REC_PIPE.RISK_WRITE_OFF_LIFE_DT_AMT:=REC.RISK_WRITE_OFF_LIFE_DT_AMT;
    REC_PIPE.RT_BUS_AREA_UBR_ID:=REC.RT_BUS_AREA_UBR_ID;
    REC_PIPE.RT_CB_CCDB_ID:=REC.RT_CB_CCDB_ID;
    REC_PIPE.RT_COV_CD:=REC.RT_COV_CD;
    REC_PIPE.RT_COV_SRC_CD:=REC.RT_COV_SRC_CD;
    REC_PIPE.RT_COV_TYPE_CD:=REC.RT_COV_TYPE_CD;
    REC_PIPE.RT_COV_TYPE_CD_IFRS:=REC.RT_COV_TYPE_CD_IFRS;
    REC_PIPE.RT_TYPE_CD:=REC.RT_TYPE_CD;
    REC_PIPE.RT_TYPE_CD_IFRS:=REC.RT_TYPE_CD_IFRS;
    REC_PIPE.SENIORITY:=REC.SENIORITY;
    REC_PIPE.SETTLE_STATUS_ID:=REC.SETTLE_STATUS_ID;
    REC_PIPE.SETTLEMENT_CD:=REC.SETTLEMENT_CD;
    REC_PIPE.SETTLEMENT_DT:=REC.SETTLEMENT_DT;
    REC_PIPE.SOX_REF:=REC.SOX_REF;
    REC_PIPE.SPE_IND:=REC.SPE_IND;
    REC_PIPE.SPL_TREAT_IND_IFRS:=REC.SPL_TREAT_IND_IFRS;
    REC_PIPE.SRC_PROD_TYPE_CD:=REC.SRC_PROD_TYPE_CD;
    REC_PIPE.SRC_PROD_TYPE_CD_IFRS:=REC.SRC_PROD_TYPE_CD_IFRS;
    REC_PIPE.SRC_PROXY_UBR_CD:=REC.SRC_PROXY_UBR_CD;
    REC_PIPE.START_DT:=REC.START_DT;
    REC_PIPE.STRATEGIC_LEND_IND:=REC.STRATEGIC_LEND_IND;
    REC_PIPE.STRIKE_FWD_FUT_PRICE_AMT:=REC.STRIKE_FWD_FUT_PRICE_AMT;
    REC_PIPE.STRIKE_FWD_FUT_PRICE_CCY_CD:=REC.STRIKE_FWD_FUT_PRICE_CCY_CD;
    REC_PIPE.TERMINATION_LOAN_DT:=REC.TERMINATION_LOAN_DT;
    REC_PIPE.TRAD_ASSET_CD_IFRS:=REC.TRAD_ASSET_CD_IFRS;
    REC_PIPE.TRADE_DT:=REC.TRADE_DT;
    REC_PIPE.TRADE_ENTITY_CCDB_ID:=REC.TRADE_ENTITY_CCDB_ID;
    REC_PIPE.TRADE_ENTITY_LOC_CD:=REC.TRADE_ENTITY_LOC_CD;
    REC_PIPE.TRADE_RELATED_IND:=REC.TRADE_RELATED_IND;
    REC_PIPE.TRADE_STATUS:=REC.TRADE_STATUS;
    REC_PIPE.TRADING_ENTITY_LOC_CD:=REC.TRADING_ENTITY_LOC_CD;
    REC_PIPE.TRAN_MATCH_CD:=REC.TRAN_MATCH_CD;
    REC_PIPE.TRN_CD:=REC.TRN_CD;
    REC_PIPE.TRN_LINK_CD:=REC.TRN_LINK_CD;
    REC_PIPE.TRN_SRC_CD:=REC.TRN_SRC_CD;
    REC_PIPE.TRN_TYPE_CD:=REC.TRN_TYPE_CD;
    REC_PIPE.TRN_VERSION:=REC.TRN_VERSION;
    REC_PIPE.UL_DELIVERY_DT:=REC.UL_DELIVERY_DT;
    REC_PIPE.UL_MATURITY_DT:=REC.UL_MATURITY_DT;
    REC_PIPE.UNDLY_ISS_NAME:=REC.UNDLY_ISS_NAME;
    REC_PIPE.UNEARNED_INCOME_CCY_CD:=REC.UNEARNED_INCOME_CCY_CD;
    REC_PIPE.UNEARNED_INCOME_DIS_LOAN_AMT:=REC.UNEARNED_INCOME_DIS_LOAN_AMT;
    REC_PIPE.US_GAAP_POSITION_AGGR:=REC.US_GAAP_POSITION_AGGR;
    REC_PIPE.VALUATION_DT:=REC.VALUATION_DT;
    REC_PIPE.XCHG_CTRY_CD:=REC.XCHG_CTRY_CD;
    REC_PIPE.YEAR01_NOTIONAL_AMT:=REC.YEAR01_NOTIONAL_AMT;
    REC_PIPE.YEAR02_NOTIONAL_AMT:=REC.YEAR02_NOTIONAL_AMT;
    REC_PIPE.YEAR03_NOTIONAL_AMT:=REC.YEAR03_NOTIONAL_AMT;
    REC_PIPE.YEAR04_NOTIONAL_AMT:=REC.YEAR04_NOTIONAL_AMT;
    REC_PIPE.YEAR05_NOTIONAL_AMT:=REC.YEAR05_NOTIONAL_AMT;
    REC_PIPE.YEAR06_NOTIONAL_AMT:=REC.YEAR06_NOTIONAL_AMT;
    REC_PIPE.YEAR07_NOTIONAL_AMT:=REC.YEAR07_NOTIONAL_AMT;
    REC_PIPE.YEAR08_NOTIONAL_AMT:=REC.YEAR08_NOTIONAL_AMT;
    REC_PIPE.YEAR09_NOTIONAL_AMT:=REC.YEAR09_NOTIONAL_AMT;
    REC_PIPE.YEAR10_NOTIONAL_AMT:=REC.YEAR10_NOTIONAL_AMT;
    REC_PIPE.YTD_PL_TRADE_AMT:=REC.YTD_PL_TRADE_AMT;
    REC_PIPE.FCL_VALIDATED_IND:=REC.FCL_VALIDATED_IND;
    REC_PIPE.FCL_EXCLUDED_IND:=REC.FCL_EXCLUDED_IND;
    REC_PIPE.FCL_SOURCE_SYSTEM:=REC.FCL_SOURCE_SYSTEM;
    REC_PIPE.FCL_CREATE_TIMESTAMP:=REC.FCL_CREATE_TIMESTAMP;
    REC_PIPE.FCL_UPDATE_TIMESTAMP:=REC.FCL_UPDATE_TIMESTAMP;
    REC_PIPE.FCL_TRADE_LOAD_DATE:=REC.FCL_TRADE_LOAD_DATE;
    REC_PIPE.FCL_VALUATION_LOAD_DATE:=REC.FCL_VALUATION_LOAD_DATE;
    REC_PIPE.FCL_MATRIX_TRADE_TYPE:=REC.FCL_MATRIX_TRADE_TYPE;
    REC_PIPE.PCP_GCDS_REVISION:=REC.PCP_GCDS_REVISION;
    REC_PIPE.FCL_UTP_BOOK_CD:=REC.FCL_UTP_BOOK_CD;
    REC_PIPE.FCL_MIS_RISK_BOOK_CD:=REC.FCL_MIS_RISK_BOOK_CD;
    REC_PIPE.ALD_STATUS:=REC.ALD_STATUS;
    REC_PIPE.FCAT_OPERATION:=REC.FCAT_OPERATION;
    REC_PIPE.INTERNAL_INTRA_IND_IFRS:=REC.INTERNAL_INTRA_IND_IFRS;
    REC_PIPE.FCL_USAGE_IND:=REC.FCL_USAGE_IND;
    REC_PIPE.QUICK:=REC.QUICK;
    REC_PIPE.SEDOL:=REC.SEDOL;
    REC_PIPE.CUSIP:=REC.CUSIP;
    REC_PIPE.ISIN:=REC.ISIN;
    REC_PIPE.QUANTITY:=REC.QUANTITY;
    REC_PIPE.RIC:=REC.RIC;
    REC_PIPE.DESCRIPTION:=REC.DESCRIPTION;
    REC_PIPE.RESET_DATE:=REC.RESET_DATE;
    REC_PIPE.PRICE:=REC.PRICE;
    REC_PIPE.EXCHANGERATE:=REC.EXCHANGERATE;
    REC_PIPE.PARA_PROD_TYPE_ID:=REC.PARA_PROD_TYPE_ID;
    REC_PIPE.EFF_INT_RATE:=REC.EFF_INT_RATE;
    REC_PIPE.ORIG_FEE_UNREALISED_AMT:=REC.ORIG_FEE_UNREALISED_AMT;
    REC_PIPE.ASSET_TYPE:=REC.ASSET_TYPE;
    REC_PIPE.IMP_IND_IFRS:=REC.IMP_IND_IFRS;
    REC_PIPE.NOT_AMT_REDEMPTION_TO_1YR:=REC.NOT_AMT_REDEMPTION_TO_1YR;
    REC_PIPE.NOT_AMT_REDEMPTION_TO_5YR:=REC.NOT_AMT_REDEMPTION_TO_5YR;
    REC_PIPE.NOT_AMT_REDEMPTION_OVER_5YR:=REC.NOT_AMT_REDEMPTION_OVER_5YR;
    REC_PIPE.CASH_LTD:=REC.CASH_LTD;
    REC_PIPE.CASH_LTD_CCY:=REC.CASH_LTD_CCY;
    REC_PIPE.FCL_FACILITY_SK:=REC.FCL_FACILITY_SK;
    REC_PIPE.DILUTION_RISK_CRITERIA:=REC.DILUTION_RISK_CRITERIA;
    REC_PIPE.FCL_RMS_RUNID:=REC.FCL_RMS_RUNID;
    REC_PIPE.BSTYPE:=REC.BSTYPE;
    REC_PIPE.YTD_ACCR_DIV:=REC.YTD_ACCR_DIV;
    REC_PIPE.YTD_DIV:=REC.YTD_DIV;
    REC_PIPE.ACC_ADJ_YTD:=REC.ACC_ADJ_YTD;
    REC_PIPE.FV_RLZD_PL:=REC.FV_RLZD_PL;
    REC_PIPE.ITD_PL:=REC.ITD_PL;
    REC_PIPE.YTD_RLZD_PL:=REC.YTD_RLZD_PL;
    REC_PIPE.YTD_UNRLZD_PL:=REC.YTD_UNRLZD_PL;
    REC_PIPE.TRN_BAS_LGD:=REC.TRN_BAS_LGD;
    REC_PIPE.TRAD_YTD_RLZD_PL:=REC.TRAD_YTD_RLZD_PL;
    REC_PIPE.TRAD_YTD_UNRLZD_PL:=REC.TRAD_YTD_UNRLZD_PL;
    REC_PIPE.UNADJUSTED_PV:=REC.UNADJUSTED_PV;
    REC_PIPE.UNADJUSTED_REALISED_PL:=REC.UNADJUSTED_REALISED_PL;
    REC_PIPE.UNADJUSTED_UNREALISED_PL:=REC.UNADJUSTED_UNREALISED_PL;
    REC_PIPE.GRC_PROD_TYPE_ID:=REC.GRC_PROD_TYPE_ID;
    REC_PIPE.GRC_PROD_TYPE_ID_IFRS:=REC.GRC_PROD_TYPE_ID_IFRS;
    REC_PIPE.CUR_FEE:=REC.CUR_FEE;
    REC_PIPE.SEC_IND:=REC.SEC_IND;
    REC_PIPE.INVEST_ASSETS_PORT:=REC.INVEST_ASSETS_PORT;
    REC_PIPE.RISK_PROVISION_START_BAL_AMT:=REC.RISK_PROVISION_START_BAL_AMT;
    REC_PIPE.GEN_RISK_PRV_START_BAL_AMT:=REC.GEN_RISK_PRV_START_BAL_AMT;
    REC_PIPE.GEN_RISK_PROVISION_NEW:=REC.GEN_RISK_PROVISION_NEW;
    REC_PIPE.GEN_RISK_PROVISION_REL:=REC.GEN_RISK_PROVISION_REL;
    REC_PIPE.ACQUIRED_IND:=REC.ACQUIRED_IND;
    REC_PIPE.CRED_NET_MNA:=REC.CRED_NET_MNA;
    REC_PIPE.SEC_IFRS_VUE_ADJ:=REC.SEC_IFRS_VUE_ADJ;
    REC_PIPE.YEAR00_NOTIONAL_AMT:=REC.YEAR00_NOTIONAL_AMT;
    REC_PIPE.MAX_OVERDUE_DAYS:=REC.MAX_OVERDUE_DAYS;
    REC_PIPE.MIS_CD:=REC.MIS_CD;
    REC_PIPE.MULTINAME_POOL_SK:=REC.MULTINAME_POOL_SK;
    REC_PIPE.MULTINAME_CHUNK_REVISION_SK:=REC.MULTINAME_CHUNK_REVISION_SK;
    REC_PIPE.MTRX_CALC_CNTRL_EPE:=REC.MTRX_CALC_CNTRL_EPE;
    REC_PIPE.MTRX_CALC_CNTRL_PFE:=REC.MTRX_CALC_CNTRL_PFE;
    REC_PIPE.FCL_TE_VALIDATED_IND:=REC.FCL_TE_VALIDATED_IND;
    REC_PIPE.PCP_TE_DYNAMIC_KEY:=REC.PCP_TE_DYNAMIC_KEY;
    REC_PIPE.AVG_COST:=REC.AVG_COST;
    REC_PIPE.SEC_LONG_NAME:=REC.SEC_LONG_NAME;
    REC_PIPE.ISS_CTRY_CD:=REC.ISS_CTRY_CD;
    REC_PIPE.RISK_REPORTING_UBR:=REC.RISK_REPORTING_UBR;
    REC_PIPE.RISK_COVERING_BRANCH_CCDB:=REC.RISK_COVERING_BRANCH_CCDB;
    REC_PIPE.CLEARING_STATUS:=REC.CLEARING_STATUS;
    REC_PIPE.FCL_CPTY_SK:=REC.FCL_CPTY_SK;
    REC_PIPE.STRGRP:=REC.STRGRP;
    REC_PIPE.OPEN_ENDED_FLAG:=REC.OPEN_ENDED_FLAG;
    REC_PIPE.AVAILABILITY_IND:=REC.AVAILABILITY_IND;
    REC_PIPE.ESCRW_FLG:=REC.ESCRW_FLG;
    REC_PIPE.ESTABLISHED_RELP_FLG:=REC.ESTABLISHED_RELP_FLG;
    REC_PIPE.GOV_GTY_AMT:=REC.GOV_GTY_AMT;
    REC_PIPE.BUBA_CTRY_ID:=REC.BUBA_CTRY_ID;
    REC_PIPE.GOV_GTY_CTRY_CD:=REC.GOV_GTY_CTRY_CD;
    REC_PIPE.INTERNET_DPST_FLG:=REC.INTERNET_DPST_FLG;
    REC_PIPE.LAST_ACTIVITY_DT:=REC.LAST_ACTIVITY_DT;
    REC_PIPE.NOTICE_PERIOD_QTY:=REC.NOTICE_PERIOD_QTY;
    REC_PIPE.OPR_RELP_FLG:=REC.OPR_RELP_FLG;
    REC_PIPE.SIG_WD_PENALTY_FLG:=REC.SIG_WD_PENALTY_FLG;
    REC_PIPE.TRN_ACT_FLG:=REC.TRN_ACT_FLG;
    PIPE ROW(REC_PIPE);
    END LOOP;
    The stored procedure pipes each REC_PIPE row back to the caller.
    I think this might not be efficient enough...
    What do you think?

  • Best way to store big amount of data

    Hi, I need to store a large amount of data; written to a txt file its size is almost 12 MB, although that depends on the computer it runs on, because what I want to store is the list of all the shared files on a computer.
    Which is the best way to store it? A String array? A text file? A List? I don't need the data after the app closes.
    Thanks

    Well, then which is the best solution? LinkedList or Tree? I only need to store the full path. What I didn't say, my fault, is that I need to search for a file name once I have stored them...
    For searching, a LinkedList will be very slow if it's very large. I think the same is true of javax.swing.tree.DefaultTreeModel, which is the JDK's only tree implementation. I don't know what Jakarta-collections has - it's possible they have a tree that offers fast searching. If you want to stick to the standard Java libraries, you'll want a Set for fast searching. TreeSet keeps the entries in sorted order. If you also need to display them as a tree, you can keep them in both a Set and a tree. If you don't have enough memory to do that, then displaying the whole tree isn't going to be useful to the user anyway, so rethink your goal.
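    A minimal sketch of that Set-based approach, assuming the paths are collected as plain Strings; the class name, sample paths and the extra name-to-path map are illustrative, not from the original thread:
    import java.util.*;
    public class SharedFileIndex {
        public static void main(String[] args) {
            // All full paths, kept in sorted order; contains() is O(log n).
            TreeSet<String> paths = new TreeSet<>();
            // Bare file name -> full paths, for lookups by name only.
            Map<String, List<String>> byName = new HashMap<>();
            String[] sample = {
                "C:\\shared\\music\\song.mp3",
                "C:\\shared\\docs\\report.txt"
            };
            for (String p : sample) {
                paths.add(p);
                String name = p.substring(p.lastIndexOf('\\') + 1);
                byName.computeIfAbsent(name, k -> new ArrayList<>()).add(p);
            }
            System.out.println(paths.contains("C:\\shared\\docs\\report.txt")); // true
            System.out.println(byName.get("song.mp3"));                         // [C:\shared\music\song.mp3]
        }
    }
    If the lookup is always by bare file name rather than by full path, the map alone may be enough; the TreeSet is what gives you the sorted order a tree view would need.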

  • Handle big amount of data

    Hello,
    I have to analyse a great amount of data (more than 20 MB) that I read from a logfile. I can't split the data into smaller parts because some of my analysis methods need all of it (regression, ...).
    Are there any tricks for LabVIEW (5.0.1), how to handle big amounts of data?
    hans

    You should be able to do what you would like. If the analysis needs the whole data set at once, the complete run may take a couple of minutes, but that in itself is no problem. The drawback is that you get no feedback on how the processing is going, so it can look as if the VI is hanging. Using the plain "Read File" functions rather than "Read From Spreadsheet File" makes it easier to build a confirming progress display, including a graph monitor, into the analysis VI you want to build.
    Thanks in advance,
    Tom
    hans wrote:
    > Hello,
    >
    > I have to analyse a great amount of data (more than 20MByte), that I
    > read from a logfile. I can't split these data into smaller parts
    > because some of my analysis-methods need all data (regression,....).
    >
    > Are there any tricks for LabVIEW (5.0.1), how to handle big amounts of
    > data?
    >
    > hans

  • Socket - problem receiving big amount of data

    I have a client/server system and want to exchange XML data between them over a socket connection. I use a thread to read the received data, which looks like this:
    BufferedReader in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    while (true) {
        while (in.ready()) {
            System.out.println("received data: " + in.readLine());
        }
    }
    I'm having the problem that the XML message is sometimes received incomplete, i.e. only portions of it are received, or more than one message can be read from the socket's input stream. I've tried to increase the socket's receive buffer size, but this doesn't help.
    I would expect that the input stream signals "ready" when the complete message has been received, i.e. when the last TCP packet has arrived. Is it possible to implement it that way?

    "The Socket class should be responsible for the TCP packets, e.g. request lost packets, sort the packets into the right order and deliver the complete message to the higher application level." The Socket class is not responsible for packets; the operating system's TCP/IP stack is.
    Where you go wrong is assuming that TCP/IP delivers messages. It doesn't. It delivers a byte stream. "Messages" can be delivered in chunks of any size; it's at the mercy of all the routers etc. in the network.
    It is the responsibility of the application to chop the incoming bytes up into messages if messages are desired, e.g. send a length first and then the message, or delimit messages somehow (e.g. with a newline character).
    And don't use in.ready(); it will bite you in the azz one way or another.
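    A minimal sketch of that length-prefix idea, assuming both sides agree on a 4-byte length header and UTF-8; the method names are illustrative, and socket setup and error handling are omitted:
    import java.io.*;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;
    public class XmlFraming {
        // Sender: write the message length first, then the message bytes.
        static void sendMessage(Socket socket, String xml) throws IOException {
            DataOutputStream out = new DataOutputStream(socket.getOutputStream());
            byte[] payload = xml.getBytes(StandardCharsets.UTF_8);
            out.writeInt(payload.length);
            out.write(payload);
            out.flush();
        }
        // Receiver: read exactly one message, blocking until every byte has arrived.
        static String readMessage(Socket socket) throws IOException {
            DataInputStream in = new DataInputStream(socket.getInputStream());
            int length = in.readInt();
            byte[] payload = new byte[length];
            in.readFully(payload);   // blocks until 'length' bytes have been read
            return new String(payload, StandardCharsets.UTF_8);
        }
    }
    In a real application you would create the two streams once per connection rather than per message, but the framing stays the same: the reader never depends on ready(), only on the announced length.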

  • Problem retrieving large amount of data!

    Hi,
    I'm currently working on a database application that accesses a very large amount of data. One query can result in 500,000+ hits!
    I would like to present this data in a JTable.
    When the query is executed, I create a two-dimensional array and store the data in it. The problem is that I get an OutOfMemoryError when I reach the 150,000th row and add it to the array.
    I've looked into the design pattern "Value List Handler" and it seems it could be of use. But still I need to populate an array with all the data, and then I get the error.
    Is there somehow I could query the database, populate part of the data in a smaller array, use the "Value List Handler" pattern to access small portions of the complete resultset?
    Another problem is that the user wants the ability to sort asc/desc by clicking column headers in the JTable. Then I need to access all the data in that table to sort it correctly. Could I re-query the database with an "ORDER BY <column> ASC" and use a modification of the Value List Handler pattern?
    I'm a bit confused, please help!
    Kind regards, Andreas

    The only chance you have: select only as many rows as you display on the screen. When the user hits "next page", retrieve the next rows.
    You might be able to do this with a scrollable resultset, but with that you are left to the mercy of the JDBC driver concerning memory management.
    So you need to think about a solution where you issue a new SELECT narrowing down the result set depending on the first/last row displayed.
    Search this forum for pagewise navigation; this question gets asked about 5 times a week (mostly in conjunction with web applications, but the logic behind it should be the same).
    Tailoring your TableModel might be tricky as well.
    Thomas
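    As a concrete illustration of the pagewise navigation Thomas mentions, here is a sketch of a JDBC page query, assuming an Oracle database (so ROWNUM is available) and made-up table and column names; other databases have their own LIMIT/OFFSET syntax:
    import java.sql.*;
    import java.util.*;
    public class PageFetcher {
        private static final int PAGE_SIZE = 50;
        // Fetch only the rows for one page instead of the whole 500,000-row result.
        static List<String> fetchPage(Connection con, int page) throws SQLException {
            String sql =
                "SELECT name FROM ( " +
                "  SELECT t.name, ROWNUM rn FROM ( " +
                "    SELECT name FROM customers ORDER BY name ASC " +
                "  ) t WHERE ROWNUM <= ? " +
                ") WHERE rn > ?";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setInt(1, page * PAGE_SIZE);        // upper row bound (inclusive)
                ps.setInt(2, (page - 1) * PAGE_SIZE);  // lower row bound (exclusive)
                List<String> rows = new ArrayList<>();
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        rows.add(rs.getString(1));
                    }
                }
                return rows;
            }
        }
    }
    Sorting by a clicked column header then just means changing the ORDER BY in the innermost query and re-running the page query, so the JTable's TableModel only ever holds one page of rows.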

  • Freeze when writing large amount of data to iPod through USB

    I used to take backups of my PowerBook to my 60G iPod video. Backups are taken with tar in terminal directly to mounted iPod volume.
    Now, every time I try to write a large amount of data to the iPod (from the MacBook Pro), the whole system freezes (the mouse cursor moves, but nothing else can be done). When the USB cable is pulled out, the system recovers and acts as it should. This problem happens every time a large amount of data is written to the iPod.
    The same iPod works perfectly (when backing up) on the PowerBook, and small amounts of data can easily be written to it (on the MacBook Pro) without problems.
    Does anyone else have the same problem? Any ideas why is this and how to resolve the issue?
    MacBook Pro, 2.0Ghz, 100GB 7200RPM, 1GB Ram   Mac OS X (10.4.5)   IPod Video 60G connected through USB

    Ex PC user...never had a problem.
    Got a MacBook Pro last week...having the same issues...and this is now with an exchanged machine!
    I've read elsewhere that it's something to do with the USB timing out, and that if you get a new USB port and attach through it (one that's powered separately), it should work. Kind of a bummer, but those folks who tried it say it works.
    Me, I can upload to the iPod piecemeal, manually...but even then, it sometimes freezes.
    The good news is that once the iPod is loaded, the problem shouldn't happen. It's the large amounts of data.
    Apple should DEFINITELY fix this though. Unbelievable.
    MacBook Pro 2.0   Mac OS X (10.4.6)  

  • JSP and large amounts of data

    Hello fellow Java fans
    First, let me point out that I'm a big Java and Linux fan, but somehow I ended up working with .NET and Microsoft.
    Right now my software development team is working on a web tool for a very important microchip manufacturer. This tool handles large amounts of data; some of our online reports generate more than 100,000 rows which need to be displayed in a web client such as Internet Explorer.
    We make use of Infragistics, which is a set of controls for .NET. Infragistics allows me to load data fetched from a database on a control they call UltraWebGrid.
    Our problem comes up when we load large amounts of data into the UltraWebGrid; sometimes we have to load 100,000+ rows. During this loading the IIS server's memory gets exhausted, and it can take up to 5 minutes for the server to finish processing and display the 100,000+ row report. We have already proved that the database server (SQL Server) is not the problem; our problem is the IIS web server.
    Our team is now considering migrating this web tool to Java and JSP. Can you all help me with some links, information, or past experiences you all have had with loading and displaying large amounts of data like the ones we handle on JSP? Help will be greatly appreciated.

    Who in the world actually looks at a 100,000 row report?
    Anyway, if I were you and I had to do it because some clueless management person decided it was a good idea... I would write a program that, once a day, week, year or whatever your time period is, produces the report (maybe as a PDF, but you could do it in HTML if you really must have it that way) and have it as a static file that you link to from your app.
    Then the user will just have to wait while it downloads, but the web server or web application server will not be bogged down trying to produce that monstrosity.
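    A rough sketch of that "produce it once, serve it statically" idea, assuming the report can be streamed straight to an HTML file by a scheduled job; the JDBC URL, query, file path and schedule here are all placeholders:
    import java.io.PrintWriter;
    import java.sql.*;
    import java.util.concurrent.*;
    public class ReportJob {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Regenerate the static report once a day; the web app only links to the file.
            scheduler.scheduleAtFixedRate(ReportJob::writeReport, 0, 24, TimeUnit.HOURS);
        }
        static void writeReport() {
            try (Connection con = DriverManager.getConnection("jdbc:...", "user", "pass");
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery("SELECT part_no, qty FROM production_report");
                 PrintWriter out = new PrintWriter("/var/www/static/report.html")) {
                out.println("<html><body><table>");
                while (rs.next()) {
                    // Rows are streamed straight to disk, never held in memory all at once.
                    out.println("<tr><td>" + rs.getString(1) + "</td><td>" + rs.getInt(2) + "</td></tr>");
                }
                out.println("</table></body></html>");
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }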

  • Copying large amount of data from one table to another getting slower

    I have a process that copies data from one table (big_tbl) into a very big archive table (vb_archive_tbl - 30 million recs - partitioned table). If there are fewer than 1 million records in big_tbl the copy to vb_archive_tbl is fast (around 10 min) and, more importantly, it's consistent. However, if the number of records in big_tbl is greater than 1 million, copying the data into vb_archive_tbl is very slow (from 30 min up to 4 hours) and very inconsistent. Every few days the time it takes to copy the same amount of data grows significantly.
    Here's an example of the code I'm using, which uses BULK COLLECT and FORALL INSERT to copy the data.
    I occasionally change 'LIMIT 5000' to see performance differences.
    DECLARE
        TYPE t_rec_type IS RECORD (fact_id    NUMBER(12,0),
                                   store_id   VARCHAR2(10),
                                   product_id VARCHAR2(20));
        TYPE CFF_TYPE IS TABLE OF t_rec_type
            INDEX BY BINARY_INTEGER;
        T_CFF CFF_TYPE;
        CURSOR c_cff IS
            SELECT * FROM big_tbl;
    BEGIN
        OPEN c_cff;
        LOOP
            FETCH c_cff BULK COLLECT INTO T_CFF LIMIT 5000;
            FORALL i IN T_CFF.first..T_CFF.last
                INSERT INTO vb_archive_tbl
                VALUES T_CFF(i);
            COMMIT;
            EXIT WHEN c_cff%NOTFOUND;
        END LOOP;
        CLOSE c_cff;
    END;
    Thanks you very much for any advice
    Edited by: reid on Sep 11, 2008 5:23 PM

    Assuming that there is nothing else in the code that forces you to use PL/SQL for processing, I'll second Tubby's comment that this would be better done in SQL. Depending on the logic and partitioning approach for the archive table, you may be better off doing a direct-path load into a staging table and then doing a partition exchange to load the staging table into the partitioned table. Ideally, you could just move big_tbl into the vb_archive_tbl with a single partition exchange operation.
    That said, if there is a need for PL/SQL, have you traced the session to see what is causing the slowness? Is the query plan different? If the number of rows in the table is really a trigger, I would tend to suspect that the number of rows is causing the optimizer to choose a different plan (with your sample code, the plan is obvious, but perhaps you omitted some where clauses to simplify things down) which may be rather poor.
    Justin

  • Handling Huge Amount of data in Browser

    Some information regarding large data handling in a Web browser: browser data is downloaded to the cache of the local machine. So when the browser needs data to be downloaded in terms of MBs and GBs, how can we handle that?
    The requirement is as follows.
    A performance monitoring application is collecting performance data of a system every 30 seconds. The size of the data collected can be around 10 KB for each interval, and it is logged to a database. If this application runs for one day, the statistical data size will be around 30 MB (28.8 MB). If it runs for one week, the data size will be 210 MB. There is no limitation on the number of days from the software perspective.
    The user needs to see this statistical data in the browser. We are not sure if transferring this huge amount of data to the browser in one go is feasible. The user should be able to get the overall picture of the logged data for a particular period and, if needed, should be able to drill down step by step to smaller ranges.
    For example, if the user queries for data between the 10th Nov and the 20th Nov, the user expects to get an overall idea of the 11 days of data. Note that it is not possible to show each 30-second sample when showing 11 days of data, so some logic has to be applied to present the 11 days of data in a reasonably acceptable form. Then the user can select a particular date in the graph, and the data for that day alone should be shown with better granularity than the overall graph.
    Note: the applet may not be a signed applet.

    How do you download gigabytes of data to a browser? The answer is simple. You don't. A data analysis package like the one you describe should run on the server and send the requested summary views to the browser.
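    As a sketch of what "send the requested summary views" can mean in practice, the server can collapse the 30-second samples for the requested range into coarser buckets (hourly averages, say) before anything goes to the browser. The class and field names below are illustrative only:
    import java.util.*;
    public class Downsampler {
        /** One raw sample: collection time in ms since the epoch plus the measured value. */
        static class Sample {
            final long timestampMillis;
            final double value;
            Sample(long timestampMillis, double value) {
                this.timestampMillis = timestampMillis;
                this.value = value;
            }
        }
        /**
         * Collapse 30-second samples into fixed-size buckets (e.g. one hour) and return
         * one average per bucket - this summary, not the raw data, is sent to the browser.
         */
        static SortedMap<Long, Double> summarize(List<Sample> samples, long bucketMillis) {
            SortedMap<Long, double[]> acc = new TreeMap<>();   // bucket start -> {sum, count}
            for (Sample s : samples) {
                long bucket = (s.timestampMillis / bucketMillis) * bucketMillis;
                double[] a = acc.computeIfAbsent(bucket, k -> new double[2]);
                a[0] += s.value;
                a[1] += 1;
            }
            SortedMap<Long, Double> averages = new TreeMap<>();
            for (Map.Entry<Long, double[]> e : acc.entrySet()) {
                averages.put(e.getKey(), e.getValue()[0] / e.getValue()[1]);
            }
            return averages;
        }
    }
    Drilling down to a single day is then just the same call with a smaller date range and a smaller bucket size.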

  • Large Amount of Data in JSF

    Hello,
    I am using the Table Group component for displaying data in my application designed in Java Studio Creator.
    I have enabled paging on the component. I use a CachedRowSet on the bean for the page to get the data. This works very well at the moment in my development environment. At the moment I am testing with a small amount of data.
    I was wondering how this component performs with very large amounts of data (>75,000 rows). I noticed that there is a button available for users to retrieve all the rows. So I was wondering, apart from that instance, when viewing in paged mode does the component get all the results from the database every time?
    Which component would be best suited for displaying large amounts of data in a table format?
    Thanks In Advance!!

    Thanks for your reply. The table control that I use does have paging as a feature and I have enabled it. It still takes time to load the data initially.
    I wonder if it has got to do with the logic of paging. How do you specify which set of 20 records to extract from SQL?
    Thanks for your help!!

  • Airport Extreme Intermittent Network Interruption when Downloading Large Amounts of Data.

    I've had an Airport Extreme Base Station for about 2.5 years and have had no problems until the last 6 months.  I have my iMac and a PC directly connected through ethernet and another PC connected wirelessly.  I occasionally need to download very large data files that max out my download connection speed at about 2.5 Mbps.  During these downloads, my entire network loses its connection to the internet intermittently, for between 2 and 8 seconds, with a separation between connection losses of around 20-30 seconds.  This includes the hard-wired machines.  I've tested a download with a direct connection to my cable modem without incident.  The base station is causing the problem.  I've attempted to reset the Base Station with good results after the reset, but then the problem simply returns after a while.  I've updated the firmware to the latest version with no change.
    Can anyone help me with the cause of the connection loss and a method of preventing it?  THIS IS NOT A WIRELESS PROBLEM.  I believe it has to do with the massive amount of data being handled.  Any help would be appreciated.

    Ok, did some more sniffing around and found this thread.
    https://discussions.apple.com/thread/2508959?start=0&tstart=0
    It seems that the AEBS has had a serious flaw for the last 6 years that Apple has been unable to address adequately.  Here is a portion of the log file.  It simply repeats the same log entries over and over.
    Mar 07 21:25:17
    Severity:5
    Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:25:17
    Severity:5
    Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:26:17
    Severity:5
    Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:26:17
    Severity:5
    Rotated CCMP group key.
    Mar 07 21:30:43
    Severity:5
    Rotated CCMP group key.
    Mar 07 21:36:41
    Severity:5
    Clock synchronized to network time server time.apple.com (adjusted +0 seconds).
    Mar 07 21:55:08
    Severity:5
    Associated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:08
    Severity:5
    Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 21:55:32
    Severity:5
    Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 21:55:33
    Severity:5
    Rotated CCMP group key.
    Mar 07 21:59:47
    Severity:5
    Rotated CCMP group key.
    Mar 07 22:24:53
    Severity:5
    Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:24:53
    Severity:5
    Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Mar 07 22:25:18
    Severity:5
    Disassociated with station 58:55:ca:c7:c2:ae
    Mar 07 22:25:18
    Severity:5
    Rotated CCMP group key.
    Mar 07 22:30:43
    Severity:5
    Rotated CCMP group key.
    Mar 07 22:36:42
    Severity:5
    Clock synchronized to network time server time.apple.com (adjusted -1 seconds).
    Mar 07 22:54:37
    Severity:5
    Associated with station 58:55:ca:c7:c2:ae
    Mar 07 22:54:37
    Severity:5
    Installed unicast CCMP key for supplicant 58:55:ca:c7:c2:ae
    Anyone have any ideas why this is happening?

  • Send large amounts of data

    Hello everyone,
    I made an applet that receives data, signs it and returns the signed data. When the amount of data is too big, I break it into blocks of 255 bytes and use the method Signature.update.
    OK, this is working fine, but performance is poor due to the large number of blocks. Is it possible to increase the size of the blocks?
    Thanks.

    Hi,
    You cannot change the block size, but you can change what you send in.
    You may get better performance by sending multiples of your hash function's block size, as the card will not have to do internal buffering.
    You could also do as much of the work in your host application as possible and then just send in the data that you need to operate on with the private key. Generating the hash of the message does not require a private key, so it can be done in your host. You then send the result of the hash to the card to be encrypted with the private key. This will be the fastest method.
    Cheers,
    Shane
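    A small host-side sketch of that last suggestion: hash the whole message off-card and send only the digest to the applet for the private-key operation. The sendDigestToCard method is a placeholder for whatever APDU transport the host application already uses, and SHA-1 is only an example algorithm:
    import java.security.MessageDigest;
    public class HostSideHashing {
        public static void main(String[] args) throws Exception {
            byte[] message = loadLargeMessage();          // hypothetical: the data to be signed
            // Hash the whole message on the host; no private key is needed for this step.
            MessageDigest sha = MessageDigest.getInstance("SHA-1");
            byte[] digest = sha.digest(message);          // 20 bytes instead of many 255-byte blocks
            // Only the short digest crosses the APDU interface; the card applies the private key.
            byte[] signature = sendDigestToCard(digest);  // placeholder for the existing APDU layer
            System.out.println("signature length: " + signature.length);
        }
        static byte[] loadLargeMessage() {
            return new byte[100_000];                     // stand-in for the real message
        }
        static byte[] sendDigestToCard(byte[] digest) {
            // In the real application this wraps the digest in a command APDU and
            // returns the response data (the signature) from the card.
            return new byte[128];
        }
    }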

  • DSS problems when publishing large amount of data fast

    Has anyone experienced problems when sending large amounts of data using the DSS? I have approximately 130 to 150 items that I send through the DSS to communicate between different parts of my application.
    There are several loops publishing data. One publishes approximately 50 items at a rate of 50 ms, another about 40 items with a 100 ms publishing rate.
    I send a command to a subprogram (125 ms) that reads and publishes the answer on a DSS URL (approx. 125 ms). So that is one item on the DSS for about 250 ms. But this data is not seen in my main GUI window that reads the DSS URL.
    My questions are:
    1. Is there any limit in speed (frequency) for data publishing in DSS?
    2. Can DSS be unstable if loaded to much?
    3. Can I lose/miss data in any situation?
    4. In the DSS Manager I have doubled the MaxItems and MaxConnections. How will this affect my system?
    5. When I run my full application I have experienced the following error: Fatal Internal Error: "memory.ccp", line 638. Can this be a result of my large application and the heavy load on the DSS? (see attached picture)
    Regards
    Idriz Zogaj
    Idriz "Minnet" Zogaj, M.Sc. Engineering Physics
    Memory Profesional
    direct: +46 (0) - 734 32 00 10
    http://www.zogaj.se

    LuI wrote:
    >
    > Hi all,
    >
    > I am frustrated on VISA serial comm. It looks so neat and its
    > fantastic what it supposes to do for a develloper, but sometimes one
    > runs into trouble very deep.
    > I have an app where I have to read large amounts of data streamed by
    > 13 µCs at 230kBaud. (They do not necessarily need to stream all at the
    > same time.)
    > I use either a Moxa multiport adapter C320 with 16 serial ports or -
    > for test purposes - a Keyspan serial-2-USB adapter with 4 serial
    > ports.
    Does it work better if you use the serial port(s) on your motherboard? If so, then get a better serial adapter. If not, look more closely at VISA.
    Some programs have some issues on serial adapters but run fine on a regular serial port. We've had that problem recently.
    Best, Mark

  • Storing large amounts of data

    Hello,
    I'd like to use Berkeley DB for logging large amounts of data - i.e. structures that are ~400 KB in size, and I need to store them ~10 times per second for up to several hours - but I run into quite big performance issues the more records I insert into the database. I've set the pagesize to its maximum (64 KB - I split my data into several packages so it doesn't get stored on an overflow page) and experimented with several cache sizes (8 MB, 64 MB, 2 GB, 4 GB), but I haven't managed to get rid of the performance issues, independent of which access method I use (although I got the "best" results when using DB_QUEUE, but that varies heavily from day to day).
    To get to the point: performance starts at "0" seconds per insert (where 1 "insert" = 7 real inserts because of splitting up the data); between the 16,750th and 17,000th insertion it takes ~0.00352 seconds per insert, by the 36,000th insertion it already takes about 0.0074 seconds per insert, and so on...
    Does anyone have an idea of how I can increase the performance? When the time needed for each insertion keeps growing, at some point it becomes impossible to keep the program running at its intended speed.
    Thanks,
    Thomas

    Hello,
    A good starting point are the suggestions in the Berkeley DB Reference Guide at:
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/am_misc_tune.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_tune.html
    http://www.oracle.com/technology/documentation/berkeley-db/db/programmer_reference/transapp_throughput.html
    Thanks,
    Sandra
