APQD table size is increasing rapidly

Hi All,
File system space has been filling up very fast. We analyzed the
largest tables and found that the APQD table is exceptionally large in
our Production system. We are therefore planning to delete session logs
older than five days from the APQD table. We referred to SAP Note
36781 - "Table APQD is very large", and ran the reports RSBDCCKA and
RSBDCCKT and then RSBDCREO, but found no significant change at the
table level or in the file system (screenshot attached).
We are now planning to reorganize the APQD table to reclaim free space.
Can you please tell me whether a reorganization of the APQD table will
gain significant space? Our APQD table is 350 GB, so the reorg will
generate huge archive log files that will need to be backed up. Is
there another strategy that avoids this? Quick help with this
production issue is appreciated.
-Anthony...

> number "36781 - Table APQD is very large", and run the
> report RSBDCCKA and RSBDCCKT and then "RSBDCREO", but found no
> significant change at table level and file system as well. (attached is
As far as I remember, RSBDCREO is client-dependent.
If you ran the report in client 000, it might not affect the old
batch input data in your business client.
A reorg only makes sense if you have deleted a significant amount of data first.
As for file system space, you should see a reduced number of files in
the batch input subdirectories under GLOBAL_DIR.
Volker
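
A quick way to verify from the database side how much space APQD actually occupies, and whether the deletes from RSBDCREO had any effect, is to query the data dictionary. A minimal sketch for Oracle; the owner of the SAP tables (SAPR3, SAPSR3, ...) is installation-specific, so treat the schema as an assumption:

```sql
-- Size of table APQD and related segments, in GB
-- (schema owner is installation-specific: SAPR3, SAPSR3, ...)
SELECT owner, segment_name, segment_type,
       ROUND(bytes/1024/1024/1024, 2) AS size_gb
FROM   dba_segments
WHERE  segment_name LIKE 'APQD%'
ORDER  BY bytes DESC;
```

Running this before and after the deletion reports shows whether rows were actually removed: if the row count drops but size_gb does not, that is expected, since deletes alone do not lower the high-water mark and only a reorg returns the space to the tablespace.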

Similar Messages

  • Table APQD size is 64 GB in ECC 6.0

    Hi expert,
    we are using ECC 6.0 with Oracle 10g.
    In our ECC 6.0 system, the APQD table is 64 GB.
    How can I remove the data? If I delete the sessions in SM35, will the table size go down?
    Regards,

    Hi,
    Please read the following SAP Note and its related notes.
    Note 36781 - Table APQD is very large
    Hope this helps.
    Regards,
    Varun

  • Index size larger than the table size

    Hi All,
    Let me know the possible reasons why an index can be larger than its table, and in some cases smaller than the table.
    Thanks in advance
    sherief

    hi,
    The size of an index depends on how inserts and deletes occur.
    With sequential indexes, when records are deleted randomly the space will not be reused, because all inserts go into the leading leaf block.
    Only when all the records in a leaf block have been deleted is the leaf block freed (put on the index freelist) for reuse, which reduces the overall percentage of free space.
    This means that if you are deleting aged sequence records at the same rate as you are inserting, the number of leaf blocks will stay approximately constant with a constant, low percentage of free space. In this case it is probably hardly ever worth rebuilding the index.
    With records being deleted randomly, the inefficiency of the index depends on how the index is used.
    If numerous full index (or range) scans are being done, it should be rebuilt to reduce the number of leaf blocks read. This should be done before it significantly affects the performance of the system.
    If single-key index accesses are being done, it only needs to be rebuilt to stop the branch depth from increasing or to recover the unused space.
    Here is an example of how an index can become larger than its table:
    Connected to Oracle Database 10g Enterprise Edition Release 10.2.0.3.0
    Connected as admin
    SQL> create table rich as select rownum c1,'Verde' c2 from all_objects;
    Table created
    SQL> create index rich_i on rich(c1);
    Index created
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> delete from rich where mod(c1,2)=0;
    29475 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1179648 144 9
    INDEX 1179648 144 9
    SQL> insert into rich select rownum+100000, 'qq' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 1703936 208 13
    INDEX 2097152 256 16
    SQL> insert into rich select rownum+200000, 'aa' from all_objects;
    58952 rows inserted
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> delete from rich where mod(c1,2)=0;
    58952 rows deleted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 2752512 336 21
    INDEX 3014656 368 23
    SQL> insert into rich select rownum+300000, 'hh' from all_objects;
    58952 rows inserted
    SQL> commit;
    Commit complete
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 4063232 496 31
    SQL> alter index rich_i rebuild;
    Index altered
    SQL> select segment_type,bytes,blocks,extents from user_segments where segment_name like 'RICH%';
    SEGMENT_TYPE BYTES BLOCKS EXTENTS
    TABLE 3014656 368 23
    INDEX 2752512 336 21
    SQL>
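
    Before deciding to rebuild, it can be worth measuring how much of the index is actually dead space. A sketch using Oracle's ANALYZE ... VALIDATE STRUCTURE (note that it locks the table while it runs, so avoid it on busy production tables):

    ```sql
    -- Fills the session-private INDEX_STATS view
    -- (one row, for the last index analyzed)
    ANALYZE INDEX rich_i VALIDATE STRUCTURE;

    -- Ratio of deleted leaf row space to total leaf row space
    SELECT name, height, lf_rows, del_lf_rows,
           ROUND(100 * del_lf_rows_len / lf_rows_len, 1) AS pct_deleted
    FROM   index_stats;
    ```

    A common rule of thumb is to rebuild only when the deleted percentage is high (say above 20%) or the height keeps growing; otherwise the free space will simply be reused by new inserts.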

  • Problem with table size (initial extent)

    Hi,
    I have imported a table from my client's database, and user_segments shows the following size parameters for it:
    bytes : 33628160
    blocks : 4105
    extents : 1
    initial_extent : 33611776
    next_extent : 65536
    The number of rows in the table is 0 (zero). I am wondering how the table size could become so large, while other tables in the schema in the same tablespace have a normal initial extent size.
    I then created a tablespace with an initial and next extent of 64K each and imported the data into it, after which the table size and the initial extent of the table remained 33611776. This is the problem with 4-5 other tables out of a total of 500 tables.
    Of course, if I drop and recreate the table there is no problem, and the initial extent size and the table size become 64K, as per the tablespace.
    Any suggestions? I do not want to drop the tables and recreate them.
    Because of this problem, even an attempt to import a blank database consumes 2 GB of disk space.
    Thanks in advance
    DSG

    I don't think you can stop the extent from being allocated when you import the table.
    Even if you let the table inherit storage parameters from the tablespace, it will still allocate as many 64K extents as it needs to reach the 33M size in the table's (imported) storage parameters. I have also seen that when you try to change storage during an import like that, dba_tables shows the table with an initial setting of 33M even though dba_segments shows that every extent allocated was in fact 64K. The dba_tables value is populated directly from the import and therefore reports the wrong number.
    Perhaps you can import and then CREATE TABLE AS ... to put the tables in a better storage setup. (Letting tables inherit from the tablespace is the best way to go; no fragmentation that way.) You might want to get the client to let you revamp the storage, since there's no good reason to have one huge extent like that.
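
    As a sketch of that suggestion (the table and tablespace names here are made up; indexes, constraints, grants and triggers would also need to be recreated on the new table):

    ```sql
    -- Rebuild the table so it inherits the 64K extents from the tablespace
    CREATE TABLE big_table_new TABLESPACE small_ext_ts
      AS SELECT * FROM big_table;

    DROP TABLE big_table;
    RENAME big_table_new TO big_table;
    ```

    An ALTER TABLE ... MOVE into a locally managed tablespace with uniform 64K extents may achieve the same result without the drop/rename, though the oversized INITIAL value can still show up in dba_tables for the reason described above.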

  • The size of prddata is increasing rapidly; dbf files are growing

    Hi
    the size of prddata is increasing rapidly; the dbf files are growing, which results in 100% space utilization.
    Can I delete the temp01.dbf file? What is a possible solution, given that the dbf files are growing at an unexpectedly high rate?
    Thanks
    Rahul

    DB version is 9.2.0.6.0
    Application 11.5.10.2
    alert.log is showing the error
    ORA-04098: trigger 'APPS.XXXX_APPS_LOGON_TRG' is invalid and failed re-validation
    Output of the query:
    select object_name
    from dba_objects
    where object_type = 'TRIGGER'
    and status = 'INVALID';
    OBJECT_NAME
    AX_AP_CHECKS_ARU1
    AX_AP_CHECKS_ARU2
    AX_AP_CHECKS_BRI1
    AX_AP_HOLDS_ARI1
    AX_AP_HOLDS_ARU1
    AX_AP_HOLD_CODES_ARU1
    AX_AP_INVOICES_ARU1
    AX_AP_INVOICES_ARU2
    AX_AP_INVOICES_ARU3
    AX_AP_INVOICES_BRDI1
    AX_AP_INVOICE_DIST_ARDI1
    AX_AP_INVOICE_DIST_ARDI2
    AX_AP_INVOICE_DIST_ARU1
    AX_AP_INVOICE_DIST_ARU2
    AX_AP_INVOICE_PAY_ARDI1
    AX_DOCUMENT_STATUSES_BRU1
    AX_DOCUMENT_STATUSES_BRU2
    BEN_EXT_ADD_EVT
    FF_GLOBALS_F_BRI
    GHR_PER_ADDRESSES_AFIUD
    HRI_EDW_CMPSTN_SNPSHT_DTS_WHO
    HRI_EDW_USER_PERSON_TYPES_WHO
    HRI_GEN_HRCHY_SUMMARY_WHO
    HRI_INV_SPRTN_RSNS_WHO
    HRI_MB_WMV_WHO
    HRI_ORG_HRCHY_SUMMARY_WHO
    HRI_ORG_PARAMS_WHO
    HRI_ORG_PARAM_LIST_WHO
    HRI_PRIMARY_HRCHYS_WHO
    HRI_RECRUITMENT_STAGES_WHO
    HRI_SERVICE_BANDS_WHO
    HRI_SUPV_HRCHY_SUMMARY_WHO
    HR_ORG_INFO_BRI
    HR_PAY_IF_ASG_COST_BRD
    HR_PAY_IF_ELE_ENT_ASD
    HR_PAY_IF_PERSON_ARU
    HR_PAY_IF_PPM_BRD
    HR_PAY_IF_PPM_NO_UPDATE2_ARU
    HR_PAY_IF_PPM_NO_UPDATE_ARU
    HR_PA_MAINTAIN_ORG_HIST_BRI
    HXC_ALIAS_DEFINITIONS_WHO
    HXC_ALIAS_TYPES_WHO
    HXC_ALIAS_TYPE_COMPONENTS_WHO
    HXC_ALIAS_VALUES_WHO
    HXC_APPROVAL_COMPS_WHO
    HXC_APPROVAL_PERIOD_COMPS_WHO
    HXC_APPROVAL_PERIOD_SETS_WHO
    HXC_APPROVAL_STYLES_WHO
    HXC_BLD_BLK_INFO_TYPE_USAG_WHO
    HXC_DATA_APP_RULE_USAGES_WHO
    HXC_DEPOSIT_PROCESSES_WHO
    HXC_ENTITY_GROUPS_WHO
    HXC_ENTITY_GROUP_COMPS_WHO
    HXC_ERRORS_WHO
    HXC_LAYOUTS_WHO
    HXC_LAYOUT_COMPONENTS_WHO
    HXC_LAYOUT_COMP_DEFINITION_WHO
    HXC_LAYOUT_COMP_PROMPTS_WHO
    HXC_LAYOUT_COMP_QUALIFIERS_WHO
    HXC_LAYOUT_COMP_QUAL_RULES_WHO
    HXC_MAPPINGS_WHO
    HXC_MAPPING_COMPONENTS_WHO
    HXC_MAPPING_COMP_USAGES_WHO
    HXC_PREF_DEFINITIONS_WHO
    HXC_PREF_HIERARCHIES_WHO
    HXC_RECURRING_PERIODS_WHO
    HXC_RESOURCE_RULES_WHO
    HXC_RETRIEVAL_PROCESSES_WHO
    HXC_RETRIEVAL_RANGES_WHO
    HXC_RETRIEVAL_RULES_WHO
    HXC_RETRIEVAL_RULE_COMPS_WHO
    HXC_TIME_BUILDING_BLOCKS_WHO
    HXC_TIME_CATEGORIES_WHO
    HXC_TIME_CATEGORY_COMPS_WHO
    HXC_TIME_ENTRY_RULES_WHO
    HXC_TIME_RECIPIENTS_WHO
    HXC_TIME_SOURCES_WHO
    HXC_TK_GROUPS_WHO
    HXC_TK_GROUP_QUERIES_WHO
    HXC_TK_GROUP_QUERY_CRITERI_WHO
    HXC_TRANSACTIONS_WHO
    HXC_TRANSACTION_DETAILS_WHO
    HXT_ADD_ASSIGN_INFO_F_WHO
    HXT_ADD_ELEM_INFO_F_WHO
    HXT_DET_HOURS_WORKED_F_WHO
    HXT_EARNING_POLICIES_WHO
    HXT_EARNING_RULES_WHO
    HXT_EARN_GROUPS_WHO
    HXT_EARN_GROUP_TYPES_WHO
    HXT_ERRORS_F_WHO
    HXT_HOLIDAY_CALENDARS_WHO
    HXT_HOLIDAY_DAYS_WHO
    HXT_HOUR_DEDUCTION_RULES_WHO
    HXT_HOUR_DEDUCT_POLICIES_WHO
    HXT_PREM_ELIGBLTY_POLICIES_WHO
    HXT_PREM_ELIGBLTY_POL_RULE_WHO
    HXT_PREM_ELIGBLTY_RULES_WHO
    HXT_PREM_INTERACT_POLICIES_WHO
    HXT_PREM_INTERACT_RULES_WHO
    HXT_PROJECTS_WHO
    HXT_ROTATION_PLANS_WHO
    HXT_ROTATION_SCHEDULES_WHO
    HXT_SHIFTS_WHO
    HXT_SHIFT_DIFF_POLICIES_WHO
    HXT_SHIFT_DIFF_RULES_WHO
    HXT_SUM_HOURS_WORKED_F_WHO
    HXT_TASKS_WHO
    HXT_TIMECARDS_F_WHO
    HXT_VARIANCES_WHO
    HXT_WEEKLY_WORK_SCHEDULES_WHO
    HXT_WORK_SHIFTS_WHO
    OTA_ACTIVITY_DEFINITIONS_WHO
    OTA_BOOKING_DEALS_WHO
    OTA_BOOKING_STATUS_HISTORI_WHO
    OTA_CATEGORY_USAGES_WHO
    OTA_COMPETENCE_LANGUAGES_WHO
    OTA_CROSS_CHARGES_WHO
    OTA_DELEGATE_BOOKINGS_WHO
    OTA_EVENT_ASSOCIATIONS_WHO
    OTA_FINANCE_HEADERS_WHO
    OTA_FINANCE_LINES_WHO
    OTA_HR_GL_FLEX_MAPS_WHO
    OTA_ILN_XML_PROCESSES_WHO
    OTA_NOTRNG_HISTORIES_WHO
    OTA_PRICE_LISTS_WHO
    OTA_PRICE_LIST_ENTRIES_WHO
    OTA_RESOURCE_ALLOCATIONS_WHO
    OTA_RESOURCE_DEFINITIONS_WHO
    OTA_RESOURCE_USAGES_WHO
    OTA_SKILL_PROVISIONS_WHO
    OTA_SUPPLIABLE_RESOURCES_WHO
    OTA_TRAINING_PLANS_WHO
    OTA_TRAINING_PLAN_COSTS_WHO
    OTA_TRAINING_PLAN_MEMBERS_WHO
    OTA_VENDOR_SUPPLIES_WHO
    PAY_ASSIGNMENT_ACTIONS_BRD
    PAY_ASSIGNMENT_ACTIONS_BRU
    PAY_BALANCE_BATCH_LINES_BRIUD
    PAY_BALANCE_FEEDS_ARD
    PAY_BALANCE_FEEDS_ARI
    PAY_BALANCE_FEEDS_ARU
    PAY_DEFINED_BALANCES_ARI
    PAY_DEFINED_BALANCES_BRD
    PAY_ELEMENT_LINKS_T1
    PAY_ELEMENT_TYPES_T1
    PAY_ORG_PAYMENT_METHODS_F_OVN
    PAY_PAYMIX_LINE_INSERT
    PAY_PAYROLL_ACTIONS_BRD
    PAY_PAYROLL_ACTIONS_BRU
    PAY_PERSONAL_PAYMENT_METHO_OVN
    PAY_PERSONAL_PAY_METHODS_BRUI
    PAY_RUN_RESULTS_BRD
    PAY_TRIGGER_EVENTS_ARD
    PAY_ZA_TEMP_BRANCH_DETAILS_ARD
    PAY_ZA_TEMP_BRANCH_DETAILS_ARI
    PAY_ZA_TEMP_BRANCH_DETAILS_ASI
    PA_MAINTAIN_ORG_HIST_BRD
    PER_ABV_OVN
    PER_ALL_ASSIGNMENTS_F_OVN
    PER_ALL_PEOPLE_F_ARIU
    PER_ALL_PEOPLE_F_OVN
    PER_ESTABLISHMENT_ATTN_CHLG
    PER_JOB_REQUIREMENTS_OVN
    PER_ORG_STRUCTURE_ELEMENTS_OVN
    PER_ORS_OVN
    PER_OSV_OVN
    PER_PAY_PROPOSALS_CHLG
    PER_PAY_PROPOSALS_OVN
    PER_PERIODS_OF_SERVICE_OVN
    PER_POS_STRUCTURE_ELEMENTS_OVN
    PER_PROPOSAL_COMPS_OVN
    PER_QUERY_CRITERIA_OVN
    PER_RECRUITMENT_ACTIVITIES_OVN
    PER_REC_ACTIVITY_FOR_OVN
    PER_REQUISITIONS_OVN
    PER_VALID_GRADES_OVN
    SET_HXT_BATCH_STATES_INS
    SSP_DEL_ORPHANED_ROWS
    SSP_MEDICALS_WHO
    SSP_STOPPAGES_WHO
    SSP_WITHHOLDING_REASONS_WHO
    BEN_ABR_EXTRA_INFO_WHO
    BEN_ABR_INFO_TYPES_WHO
    BEN_ACRS_PTIP_CVG_F_WHO
    BEN_BENEFICIARIES_F_WHO
    BEN_ACTL_PREM_VRBL_RT_RL_F_WHO
    BEN_ACTN_TYP_WHO
    BEN_COVERED_DEPENDENTS_F_WHO
    BEN_VALID_DEPENDENT_TYPES_WHO
    BEN_ACTY_BASE_RT_CTFN_F_WHO
    BEN_ACTY_BASE_RT_F_WHO
    BEN_ACTY_RT_PYMT_SCHED_F_WHO
    BEN_ACTY_VRBL_RT_F_WHO
    BEN_AGE_FCTR_WHO
    BEN_AGE_RT_F_WHO
    BEN_APLCN_TO_BNFT_POOL_F_WHO
    OTA_ANNOUNCEMENTS_WHO
    BEN_APLD_DPNT_CVG_ELIG_PRF_WHO
    BEN_BATCH_BNFT_CERT_INFO_WHO
    BEN_BATCH_COMMU_INFO_WHO
    BEN_BATCH_LER_INFO_WHO
    BEN_BATCH_PARAMETER_WHO
    HXT_TIMECLOCK_ERRORS_WHO
    BEN_BNFT_POOL_RLOVR_RQMT_F_WHO
    BEN_BNFT_PRVDR_POOL_F_WHO
    BEN_BNFT_VRBL_RT_F_WHO
    BEN_BNFT_VRBL_RT_RL_F_WHO
    FF_FORMULAS_F_WHO
    FF_ROUTE_PARAMETER_VALUES_WHO
    BEN_CMBN_PLIP_F_WHO
    HXC_LOCKING_RULES_WHO
    GHR_COMPLAINTS2_WHO
    BEN_CM_TYP_F_WHO
    GHR_COMPL_AGENCY_APPEALS_WHO
    GHR_COMPL_CA_DETAILS_WHO
    GHR_COMPL_CA_HEADERS_WHO
    GHR_CPDF_TEMP_WHO
    GHR_DUAL_ACTIONS_WHO
    GHR_DUAL_PROC_METHODS_WHO
    HXC_SEEDDATA_BY_LEVEL_WHO
    GHR_DUTY_STATIONS_F_WHO
    GHR_GROUPBOX_USERS_WHO
    BEN_CM_TYP_USG_F_WHO
    BEN_DED_SCHED_PY_FREQ_WHO
    BEN_DPNT_CVG_RQD_RLSHP_F_WHO
    GHR_POIS_WHO
    GHR_POSITION_DESCRIPTIONS_WHO
    BEN_DPNT_CVRD_ANTHR_PL_CVG_WHO
    GHR_PREMIUM_PAY_INDICATORS_WHO
    GHR_PROCESS_LOG_WHO
    OTA_IMPORT_HISTORIES_WHO
    GHR_RESTRICTED_PROC_METHOD_WHO
    OTA_LEARNING_PATHS_WHO
    GHR_RIF_REGISTERS_WHO
    BEN_DPNT_OTHR_PTIP_RT_F_WHO
    GHR_ROUTING_GROUPS_WHO
    GHR_ROUTING_LISTS_WHO
    GHR_ROUTING_LIST_MEMBERS_WHO
    BEN_DSBLD_RT_F_WHO
    BEN_DSGNTR_ENRLD_CVG_F_WHO
    BEN_DSGN_RQMT_RLSHP_TYP_WHO
    BEN_EE_STAT_RT_F_WHO
    BEN_ELCTBL_CHC_CTFN_WHO
    BEN_ELIGY_PRFL_RL_F_WHO
    BEN_ELIG_AGE_CVG_F_WHO
    BEN_ELIG_COMPTNCY_PRTE_F_WHO
    BEN_ELIG_NO_OTHR_CVG_PRTE_WHO
    HR_CANVAS_PROPERTIES_WHO
    BEN_ELIG_PRTT_ANTHR_PL_PRT_WHO
    HR_COMMENTS_WHO
    HR_DE_LIABILITY_PREMIUMS_F_WHO
    BEN_ELIG_PSTN_PRTE_F_WHO
    BEN_ELIG_PYRL_PRTE_F_WHO
    BEN_ELIG_QUAL_TITL_PRTE_F_WHO
    HR_DM_APPLICATION_GROUPS_WHO
    BEN_ELIG_SCHEDD_HRS_PRTE_F_WHO
    HR_DM_DATABASES_WHO
    BEN_ELIG_SP_CLNG_PRG_PRTE_WHO
    HR_DM_DT_DELETES_WHO
    BEN_ELIG_STDNT_STAT_CVG_F_WHO
    HR_DM_EXP_IMPS_WHO
    BEN_ELIG_SUPPL_ROLE_PRTE_F_WHO
    HR_DM_GROUPS_WHO
    BEN_ELIG_SVC_AREA_PRTE_F_WHO
    BEN_ELIG_TBCO_USE_PRTE_F_WHO
    HR_DM_HIERARCHIES_WHO
    HR_DM_LOADER_PARAMS_WHO
    BEN_ELIG_TO_PRTE_RSN_F_WHO
    BEN_ELIG_TTL_CVG_VOL_PRTE_WHO
    HR_DM_LOADER_PHASE_ITEMS_WHO
    BEN_ELIG_TTL_PRTT_PRTE_F_WHO
    HR_DM_MIGRATIONS_WHO
    HR_DM_MIGRATION_RANGES_WHO
    HR_DM_MIGRATION_REQUESTS_WHO
    HR_DM_PHASE_ITEMS_WHO
    BEN_ENRLD_ANTHR_OIPL_RT_F_WHO
    BEN_ENRLD_ANTHR_PLIP_RT_F_WHO
    HR_DM_SEQUENCE_DEFINITIONS_WHO
    HR_DM_SEQUENCE_RANGES_WHO
    BEN_ENRT_BNFT_WHO
    OTA_TEST_QUESTIONS_WHO
    BEN_ENRT_PREM_WHO
    BEN_ENRT_PREM_RBV_WHO
    HR_EFC_ACTIONS_WHO
    OTA_UTEST_QUESTIONS_WHO
    OTA_UTEST_RESPONSES_WHO
    BEN_ENRT_RT_WHO
    HR_EFC_ROUNDING_ERRORS_WHO
    BEN_ENRT_RT_RBV_WHO
    HR_EFC_WORKER_AUDITS_WHO
    BEN_EXTRA_INPUT_VALUES_WHO
    HR_EXCEPTION_USAGES_WHO
    BEN_EXT_CHG_EVT_LOG_WHO
    HR_FORM_CANVASES_B_WHO
    BEN_EXT_CRIT_CMBN_WHO
    BEN_EXT_CRIT_PRFL_WHO
    BEN_EXT_CRIT_TYP_WHO
    BEN_EXT_CRIT_VAL_WHO
    BEN_EXT_DATA_ELMT_WHO
    BEN_EXT_DATA_ELMT_DECD_WHO
    BEN_EXT_DATA_ELMT_IN_RCD_WHO
    BEN_EXT_DFN_WHO
    BEN_EXT_FILE_WHO
    BEN_EXT_FLD_WHO
    HR_FORM_DATA_GROUPS_B_WHO
    BEN_EXT_INCL_CHG_WHO
    BEN_EXT_INCL_DATA_ELMT_WHO
    BEN_EXT_RCD_WHO
    BEN_EXT_RCD_IN_FILE_WHO
    BEN_EXT_RSLT_WHO
    BEN_EXT_RSLT_DTL_WHO
    BEN_EXT_RSLT_ERR_WHO
    BEN_EXT_WHERE_CLAUSE_WHO
    BEN_FL_TM_PT_TM_RT_F_WHO
    BEN_GD_OR_SVC_TYP_WHO
    BEN_GNDR_RT_F_WHO
    BEN_GRADE_RT_F_WHO
    HR_FORM_DATA_GROUP_ITEMS_WHO
    BEN_HRLY_SLRD_RT_F_WHO
    HR_FORM_ITEMS_B_WHO
    BEN_HRS_WKD_IN_PERD_FCTR_WHO
    BEN_HRS_WKD_IN_PERD_RT_F_WHO
    BEN_JOB_RT_F_WHO
    BEN_LBR_MMBR_RT_F_WHO
    BEN_LEE_RSN_F_WHO
    BEN_LEE_RSN_RL_F_WHO
    BEN_LER_BNFT_RSTRN_CTFN_F_WHO
    BEN_LER_BNFT_RSTRN_F_WHO
    BEN_LER_CHG_DPNT_CVG_CTFN_WHO
    BEN_LER_CHG_DPNT_CVG_F_WHO
    BEN_LER_CHG_OIPL_ENRT_F_WHO
    BEN_LER_CHG_OIPL_ENRT_RL_F_WHO
    BEN_LER_CHG_PGM_ENRT_F_WHO
    BEN_LER_CHG_PLIP_ENRT_F_WHO
    BEN_LER_CHG_PLIP_ENRT_RL_F_WHO
    BEN_LER_CHG_PL_NIP_ENRT_F_WHO
    BEN_LER_CHG_PL_NIP_RL_F_WHO
    BEN_LER_CHG_PTIP_ENRT_F_WHO
    BEN_LER_ENRT_CTFN_F_WHO
    BEN_LER_EXTRA_INFO_WHO
    BEN_LER_F_WHO
    HR_FORM_PROPERTIES_WHO
    HR_FORM_TAB_PAGES_B_WHO
    BEN_LER_INFO_TYPES_WHO
    BEN_LER_PER_INFO_CS_LER_F_WHO
    BEN_LER_RLTD_PER_CS_LER_F_WHO
    BEN_LER_RQRS_ENRT_CTFN_F_WHO
    BEN_LE_CLSN_N_RSTR_WHO
    BEN_LGL_ENTY_RT_F_WHO
    HR_FORM_TAB_STACKED_CANVAS_WHO
    BEN_LOS_RT_F_WHO
    HR_FORM_TEMPLATES_B_WHO
    BEN_NO_OTHR_CVG_RT_F_WHO
    BEN_OIPLIP_F_WHO
    BEN_OPT_INFO_TYPES_WHO
    BEN_ORG_UNIT_PRDCT_F_WHO
    BEN_PAIRD_RT_F_WHO
    BEN_PCT_FL_TM_FCTR_WHO
    BEN_PCT_FL_TM_RT_F_WHO
    BEN_PER_CM_F_WHO
    BEN_PER_CM_F_RBV_WHO
    BEN_PL_GD_R_SVC_CTFN_F_WHO
    BEN_PL_PCP_WHO
    BEN_PL_R_OIPL_PREM_BY_MO_F_WHO
    BEN_PL_TYP_F_WHO
    BEN_POPL_ENRT_TYP_CYCL_F_WHO
    BEN_POPL_ORG_F_WHO
    BEN_PPL_GRP_RT_F_WHO
    HR_KI_CONTEXTS_WHO
    BEN_PRTN_ELIG_PRFL_F_WHO
    HR_KI_HIERARCHIES_WHO
    BEN_PRTT_ANTHR_PL_RT_F_WHO
    BEN_PRTT_ASSOCD_INSTN_F_WHO
    BEN_PRTT_CLM_GD_OR_SVC_TYP_WHO
    BEN_PRTT_ENRT_ACTN_F_WHO
    BEN_PRTT_ENRT_CTFN_PRVDD_F_WHO
    BEN_PRTT_ENRT_RSLT_F_RBV_WHO
    BEN_PRTT_PREM_BY_MO_F_WHO
    BEN_PRTT_PREM_F_WHO
    BEN_PRTT_REIMBMT_RECON_WHO
    BEN_PRTT_REIMBMT_RQST_F_WHO
    BEN_PRTT_RMT_APRVD_FR_PYMT_WHO
    BEN_PRTT_RMT_RQST_CTFN_PRV_WHO
    BEN_PRTT_RT_VAL_CTFN_PRVDD_WHO
    BEN_PRTT_RT_VAL_RBV_WHO
    BEN_PSTL_ZIP_RT_F_WHO
    BEN_PSTN_RT_F_WHO
    BEN_PTD_BAL_TYP_F_WHO
    BEN_PTD_LMT_F_WHO
    BEN_PTIP_DPNT_CVG_CTFN_F_WHO
    HR_KI_HIERARCHY_NODE_MAPS_WHO
    HR_KI_INTEGRATIONS_WHO
    BEN_PTIP_F_WHO
    BEN_PTNL_LER_FOR_PER_WHO
    BEN_PTNL_LER_FOR_PER_RBV_WHO
    BEN_PYMT_CHECK_DET_WHO
    BEN_PYMT_SCHED_PY_FREQ_WHO
    BEN_PYRL_RT_F_WHO
    BEN_PY_BSS_RT_F_WHO
    BEN_QUAL_TITL_RT_F_WHO
    BEN_QUA_IN_GR_RT_F_WHO
    BEN_REGN_F_WHO
    BEN_REGN_FOR_REGY_BODY_F_WHO
    HR_KI_OPTIONS_WHO
    BEN_REPORTING_WHO
    BEN_RLTD_PER_CHG_CS_LER_F_WHO
    BEN_ROLL_REIMB_RQST_WHO
    BEN_RPTG_GRP_WHO
    HR_KI_SESSIONS_WHO
    HR_KI_TOPICS_WHO
    BEN_SCHEDD_ENRT_RL_F_WHO
    BEN_SCHEDD_HRS_RT_F_WHO
    HR_LEGISLATION_INSTALLATIO_WHO
    HR_LEGISLATION_SUB_INST_WHO
    BEN_STARTUP_CM_TYP_WHO
    HR_LOCATION_INFO_TYPES_WHO
    BEN_STARTUP_LERS_WHO
    BEN_STARTUP_REGN_WHO
    BEN_SVC_AREA_F_WHO
    BEN_TTL_CVG_VOL_RT_F_WHO
    BEN_TTL_PRTT_RT_F_WHO
    BEN_VRBL_RT_PRFL_RL_F_WHO
    BEN_WTHN_YR_PERD_WHO
    HR_PATTERN_EXCEPTIONS_WHO
    BEN_WV_PRTN_RSN_CTFN_PL_F_WHO
    HR_PATTERN_PURPOSES_WHO
    HR_PATTERN_PURPOSE_USAGES_WHO
    BEN_WV_PRTN_RSN_PL_F_WHO
    BEN_WV_PRTN_RSN_PTIP_F_WHO
    HR_PUMP_BATCH_HEADERS_WHO
    BEN_YR_PERD_WHO
    HR_PUMP_BATCH_LINE_USER_KE_WHO
    HR_PUMP_DEFAULT_EXCEPTIONS_WHO
    HR_PUMP_MAPPING_PACKAGES_WHO
    HR_PUMP_MODULE_PARAMETERS_WHO
    HR_QUEST_ANSWERS_WHO
    HR_QUEST_ANSWER_VALUES_WHO
    HR_QUEST_FIELDS_WHO
    HR_REPORT_LOOKUPS_WHO
    HR_SOFT_CODING_KEYFLEX_WHO
    HR_SOURCE_FORM_TEMPLATES_WHO
    HR_SUMMARY_WHO
    HR_TAB_PAGE_PROPERTIES_B_WHO
    HR_TEMPLATE_CANVASES_B_WHO
    HR_TEMPLATE_DATA_GROUPS_WHO
    HR_TEMPLATE_ITEMS_B_WHO
    HR_TEMPLATE_ITEM_CONTEXTS_WHO
    HR_TEMPLATE_ITEM_CONTEXT_P_WHO
    HR_TEMPLATE_ITEM_TAB_PAGES_WHO
    HR_TEMPLATE_TAB_PAGES_B_WHO
    HR_TEMPLATE_WINDOWS_B_WHO
    HR_WINDOW_PROPERTIES_B_WHO
    HR_WIP_LOCKS_WHO
    HR_WIP_TRANSACTIONS_WHO
    IRC_ALL_RECRUITING_SITES_WHO
    IRC_ASSIGNMENT_STATUSES_WHO
    IRC_DEFAULT_POSTINGS_WHO
    IRC_DOCUMENTS_WHO
    IRC_NOTIFICATION_PREFERENC_WHO
    IRC_POSTING_CONTENTS_WHO
    IRC_REC_TEAM_MEMBERS_WHO
    IRC_REFERENCES_WHO
    IRC_VACANCY_CONSIDERATIONS_WHO
    IRC_VAC_ALLOWED_STATUS_TYP_WHO
    IRC_VARIABLE_COMP_ELEMENTS_WHO
    PAY_ACCRUAL_BANDS_WHO
    PAY_ACTION_INFORMATION_WHO
    PAY_ALL_PAYROLLS_F_WHO
    PAY_AU_MODULES_WHO
    PAY_AU_MODULE_PARAMETERS_WHO
    PAY_AU_PROCESS_MODULES_WHO
    PAY_BALANCE_CATEGORIES_F_WHO
    PAY_BALANCE_CLASSIFICATION_WHO
    PAY_BALANCE_FEEDS_F_WHO
    PAY_BALANCE_TYPES_WHO
    PAY_BAL_ATTRIBUTE_DEFINITI_WHO
    PAY_CALENDARS_WHO
    PAY_CA_EMP_FED_TAX_INFO_F_WHO
    PAY_COST_ALLOCATION_KEYFLE_WHO
    PAY_DATETRACKED_EVENTS_WHO
    PAY_DEFINED_BALANCES_WHO
    PAY_DIMENSION_ROUTES_WHO
    PAY_ELEMENT_CLASSIFICATION_WHO
    PAY_ELEMENT_SETS_WHO
    PAY_ELEMENT_SPAN_USAGES_WHO
    PAY_ELEMENT_TEMPLATES_WHO
    PAY_ELEMENT_TYPE_EXTRA_INF_WHO
    PAY_ELEMENT_TYPE_INFO_TYPE_WHO
    PAY_ELEMENT_TYPE_USAGES_F_WHO
    PAY_ELE_CLASSIFICATION_RUL_WHO
    PAY_EVENT_QUALIFIERS_F_WHO
    PAY_EVENT_UPDATES_WHO
    PAY_EVENT_VALUE_CHANGES_F_WHO
    PAY_EXTERNAL_ACCOUNTS_WHO
    PAY_FORMULA_RESULT_RULES_F_WHO
    PAY_FREQ_RULE_PERIODS_WHO
    PAY_FUNCTIONAL_USAGES_WHO
    PAY_GROSSUP_BAL_EXCLUSIONS_WHO
    PAY_IE_PAYE_DETAILS_F_WHO
    PAY_INPUT_VALUES_F_WHO
    PAY_ITERATIVE_RULES_F_WHO
    PAY_JP_BANKS_WHO
    PAY_LEG_SETUP_DEFAULTS_WHO
    PAY_SHADOW_BALANCE_FEEDS_WHO
    PAY_SHADOW_BALANCE_TYPES_WHO
    PAY_TRIGGER_COMPONENTS_WHO
    PAY_TRIGGER_EVENTS_WHO
    PAY_USER_ROWS_F_WHO
    PAY_US_CONTRIBUTION_HISTOR_WHO
    PAY_US_GARN_FEE_RULES_F_WHO
    PAY_US_GARN_LIMIT_RULES_F_WHO
    PAY_ZA_CDV_PARAMETERS_WHO
    PER_ABSENCE_ATTENDANCES_WHO
    PER_ABSENCE_ATTENDANCE_TYP_WHO
    PER_ABS_ATTENDANCE_REASONS_WHO
    PER_ASSESSMENT_TYPES_WHO
    PER_ASSIGNMENT_BUDGET_VALU_WHO
    PER_ASSIGN_PROPOSAL_ANSWER_WHO
    PER_BF_BALANCE_AMOUNTS_WHO
    PER_BF_BALANCE_TYPES_WHO
    PER_BF_PAYROLL_RUNS_WHO
    PER_BF_PROCESSED_ASSIGNMEN_WHO
    PER_BOOKINGS_WHO
    PER_BUDGETS_WHO
    PER_BUDGET_ELEMENTS_WHO
    PER_BUDGET_VALUES_WHO
    PER_BUDGET_VERSIONS_WHO
    PER_CAGR_APIS_WHO
    PER_CAGR_API_PARAMETERS_WHO
    PER_CAGR_ENTITLEMENTS_WHO
    PER_CAGR_ENTITLEMENT_ITEMS_WHO
    PER_CAGR_ENTITLEMENT_LINES_WHO
    PER_CAGR_ENTITLEMENT_RESUL_WHO
    PER_CAGR_GRADES_WHO
    PER_CAGR_GRADES_DEF_WHO
    PER_CAGR_GRADE_STRUCTURES_WHO
    PER_CAGR_LOG_WHO
    PER_CAGR_RETAINED_RIGHTS_WHO
    PER_CALENDAR_ENTRIES_WHO
    PER_CAL_ENTRY_VALUES_WHO
    PER_CAREER_PATHS_WHO
    PER_CAREER_PATH_ELEMENTS_WHO
    PER_CHECKLIST_ITEMS_WHO
    PER_COBRA_COVERAGE_BENEFIT_WHO
    PER_COBRA_COVERAGE_PERIODS_WHO
    PER_COBRA_COVERAGE_STATUSE_WHO
    PER_COBRA_COV_ENROLLMENTS_WHO
    PER_COBRA_DEPENDENTS_F_WHO
    PER_COBRA_QFYING_EVENTS_F_WHO
    PER_COLLECTIVE_AGREEMENTS_WHO
    PER_COMPETENCES_WHO
    PER_COMPETENCE_DEFINITIONS_WHO
    PER_COMPETENCE_ELEMENTS_WHO
    PER_CONTACT_EXTRA_INFO_F_WHO
    PER_CONTACT_INFO_TYPES_WHO
    PER_CONTRACTS_F_WHO
    PER_DEPLOYMENT_FACTORS_WHO
    PER_DISABILITIES_F_WHO
    PER_ELECTIONS_WHO
    PER_ELECTION_CANDIDATES_WHO
    PER_ELECTION_CONSTITUENCYS_WHO
    PER_EMPDIR_ASSIGNMENTS_WHO
    PER_EMPDIR_IMAGES_WHO
    PER_EMPDIR_PEOPLE_WHO
    PER_EMPDIR_PHONES_WHO
    PER_ESTABLISHMENTS_WHO
    PER_ESTABLISHMENT_ATTENDAN_WHO
    PER_GEN_HIERARCHY_WHO
    PER_GEN_HIERARCHY_NODES_WHO
    PER_GEN_HIERARCHY_VERSIONS_WHO
    PER_GEN_HIER_NODE_TYPES_WHO
    PER_GRADES_WHO
    PER_GRADE_DEFINITIONS_WHO
    PER_GRADE_SPINES_F_WHO
    PER_HTML_TOOLKIT_REC_TYPES_WHO
    PER_JOB_EVALUATIONS_WHO
    PER_JOB_EXTRA_INFO_WHO
    PER_JOB_GROUPS_WHO
    PER_JOB_INFO_TYPES_WHO
    PER_JOB_REQUIREMENTS_WHO
    PER_JP_ADDRESS_LOOKUPS_WHO
    PER_JP_BANK_LOOKUPS_ALL_WHO
    PER_JP_POSTAL_CODES_WHO
    PER_KR_GRADES_WHO
    PER_KR_GRADE_AMOUNT_F_WHO
    PER_KR_G_POINTS_WHO
    PER_KR_G_POINT_AMOUNT_F_WHO
    PER_MEDICAL_ASSESSMENTS_WHO
    PER_MM_VALID_GRADES_WHO
    PER_NL_ABSENCE_ACTIONS_WHO
    PER_OBJECTIVES_WHO
    PER_ORGANIZATION_STRUCTURE_WHO
    PER_ORG_HRCHY_SUMMARY_WHO
    PER_ORG_STRUCTURE_ELEMENTS_WHO
    PER_PARENT_SPINES_WHO
    PER_PARTICIPANTS_WHO
    PER_PAY_BASES_WHO
    PER_PAY_PROPOSAL_COMPONENT_WHO
    PER_PEOPLE_INFO_TYPES_WHO
    PER_PERFORMANCE_RATINGS_WHO
    PER_PERFORMANCE_REVIEWS_WHO
    PER_PERIODS_OF_PLACEMENT_WHO
    PER_PERSON_DLVRY_METHODS_WHO
    PER_PERSON_TYPE_USAGES_F_WHO
    PER_POSITION_DEFINITIONS_WHO
    PER_POSITION_EXTRA_INFO_WHO
    PER_POSITION_INFO_TYPES_WHO
    PER_POS_STRUCTURE_ELEMENTS_WHO
    PER_POS_STRUCTURE_VERSIONS_WHO
    PER_PREVIOUS_EMPLOYERS_WHO
    PER_PREVIOUS_JOBS_WHO
    PER_PREVIOUS_JOB_USAGES_WHO
    PER_PREV_JOB_EXTRA_INFO_WHO
    PER_PREV_JOB_INFO_TYPES_WHO
    PER_PROPOSAL_CATEGORY_MEMB_WHO
    PER_PROPOSAL_CATEGORY_TYPE_WHO
    PER_PROPOSAL_OFFER_PARAGRA_WHO
    PER_PROPOSAL_QUESTIONS_ADV_WHO
    PER_PROPOSAL_QUESTION_MEMB_WHO
    PER_PROPOSAL_QUESTION_TYPE_WHO
    PER_PROPOSAL_TEMPLATES_WHO
    PER_QUALIFICATION_TYPES_WHO
    PER_QUICKPAINT_INVOCATIONS_WHO
    PER_QUICKPAINT_RESULT_TEXT_WHO
    PER_RATING_LEVELS_WHO
    PER_RATING_SCALES_WHO
    PER_RECRUITMENT_ACTIVITIES_WHO
    PER_RECRUITMENT_ACTIVITY_F_WHO
    PER_REQUISITIONS_WHO
    PER_RI_DEPENDENCIES_WHO
    PER_RI_REQUESTS_WHO
    PER_RI_SETUP_SUB_TASKS_WHO
    PER_RI_SETUP_TASKS_WHO
    PER_RI_VIEW_REPORTS_WHO
    PER_RI_WORKBENCH_ITEMS_WHO
    PER_ROLES_WHO
    PER_SCHED_COBRA_PAYMENTS_WHO
    PER_SECONDARY_ASS_STATUSES_WHO
    PER_SECURITY_ORGANIZATIONS_WHO
    PER_SECURITY_PROFILES_WHO
    PER_SECURITY_USERS_WHO
    PER_SHARED_TYPES_WHO
    PER_SOLUTION_TYPES_WHO
    PER_SPINAL_POINT_PLACEMENT_WHO
    PER_STANDARD_HOLIDAYS_WHO
    PER_TIME_PERIODS_WHO
    PER_TIME_PERIOD_RULES_WHO
    PER_WORK_INCIDENTS_WHO
    PER_ZA_AREAS_OF_ASSESSMENT_WHO
    PER_ZA_QUALIFICATION_TYPES_WHO
    PER_ZA_TRAINING_WHO
    PQH_ATTRIBUTE_RANGES_WHO
    PQH_BDGT_CMMTMNT_ELMNTS_WHO
    PQH_BUDGET_DETAILS_WHO
    PQH_BUDGET_ELEMENTS_WHO
    PQH_BUDGET_FUND_SRCS_WHO
    PQH_BUDGET_GL_FLEX_MAPS_WHO
    PQH_BUDGET_POOLS_WHO
    PQH_BUDGET_SETS_WHO
    PQH_COPY_ENTITY_CONTEXTS_WHO
    PQH_COPY_ENTITY_FUNCTIONS_WHO
    PQH_COPY_ENTITY_TXNS_WHO
    PQH_CORPS_DEFINITIONS_WHO
    PQH_DE_CASE_GROUPS_WHO
    PQH_DE_ENT_MINUTES_WHO
    PQH_DE_OPERATION_GROUPS_WHO
    PQH_DE_RESULT_SETS_WHO
    PQH_DE_WRKPLC_VLDTN_JOBS_WHO
    PQH_DE_WRKPLC_VLDTN_LVLNUM_WHO
    PQH_DE_WRKPLC_VLDTN_VERS_WHO
    PQH_DFLT_BUDGET_ELEMENTS_WHO
    PQH_DFLT_BUDGET_SETS_WHO
    PQH_DOCUMENT_ATTRIBUTES_F_WHO
    PQH_ELEMENT_COMMITMENTS_WHO
    PQH_FR_GLOBAL_INDICES_F_WHO
    PQH_FR_STAT_SITUATION_RULE_WHO
    PQH_FR_VALIDATIONS_WHO
    PQH_GL_INTERFACE_WHO
    PQH_PROCESS_LOG_WHO
    PQH_ROLE_INFO_TYPES_WHO
    PQH_SS_STEP_HISTORY_WHO
    PQH_TABLE_ROUTE_WHO
    PQH_TEMPLATES_WHO
    PQH_TEMPLATE_ATTRIBUTES_WHO
    PQH_TJR_SHADOW_WHO
    PQH_TRANSACTION_CATEGORIES_WHO
    PQH_TRANSACTION_TEMPLATES_WHO
    PQH_TXN_CATEGORY_ATTRIBUTE_WHO
    PQH_TXN_CATEGORY_DOCUMENTS_WHO
    PQH_TXN_JOB_REQUIREMENTS_WHO
    PQH_WIZARD_CANVASES_WHO
    PQH_WORKSHEETS_WHO
    PQH_WORKSHEET_BDGT_ELMNTS_WHO
    PQH_WORKSHEET_BUDGET_SETS_WHO
    PQH_WORKSHEET_DETAILS_WHO
    PQH_WORKSHEET_FUND_SRCS_WHO
    PQH_WORKSHEET_PERIODS_WHO
    PQP_ALIEN_STATE_TREATIES_F_WHO
    PQP_ALIEN_TRANSACTION_DATA_WHO
    PQP_ANALYZED_ALIEN_DATA_WHO
    PQP_ANALYZED_ALIEN_DETAILS_WHO
    PQP_ASSIGNMENT_ATTRIBUTES_WHO
    PQP_CONFIGURATION_VALUES_WHO
    PQP_EXCEPTION_REPORTS_WHO
    PQP_EXCEPTION_REPORT_GROUP_WHO
    PQP_EXCEPTION_REPORT_SUFFI_WHO
    PQP_EXTRACT_ATTRIBUTES_WHO
    PQP_EXT_CROSS_PERSON_RECOR_WHO
    PQP_GAP_ABSENCE_PLANS_WHO
    PQP_GAP_DAILY_ABSENCES_WHO
    PQP_PENSION_TYPES_F_WHO
    PQP_SERVICE_HISTORY_PERIOD_WHO
    PQP_VEHICLE_ALLOCATIONS_F_WHO
    PQP_VEHICLE_DETAILS_WHO
    PQP_VEHICLE_REPOSITORY_F_WHO
    PQP_VEH_ALLOC_EXTRA_INFO_WHO
    PQP_VEH_ALLOC_INFO_TYPES_WHO
    PQP_VEH_REPOS_EXTRA_INFO_WHO
    PQP_VEH_REPOS_INFO_TYPES_WHO
    PAY_EXTERNAL_ACCOUNTS_ARUD
    PAY_ORG_PAYMENT_METHODS_F_ARIU
    PAY_US_EMP_STATE_TAX_RULES_WHO
    PER_PHONES_WHO
    PER_TIME_PERIOD_TYPES_WHO
    HXC_ALIAS_DEFINITION012501_WHO
    OTA_ACTIVITY_DEFINIT012501_WHO
    OTA_ACTIVITY_VERSION012509_WHO
    HXC_TRANSACTION_DETAILS_BK_WHO
    OTA_OFFERINGS_TL012523_WHO
    HRI_ARCHIVE_EVENTS_WHO
    OTA_SUPPLIABLE_RESOU012528_WHO
    BEN_ACTN_TYP_TL012642_WHO
    DT_DATE_PROMPTS_TL012652_WHO
    DT_TITLE_PROMPTS_TL012653_WHO
    BEN_CM_TYP_F_TL012655_WHO
    GHR_COMPL_AGENCY_COSTS_WHO
    GHR_MASS_SALARY_CRITERIA_E_WHO
    GHR_MTS_TEMP_WHO
    HR_ALL_ORGANIZATION_012716_WHO
    HR_ALL_POSITIONS_F_T012716_WHO
    HR_FORM_CANVASES_TL012728_WHO
    HR_FORM_DATA_GROUPS_012729_WHO
    BEN_EXT_FLD_TL012730_WHO
    HR_FORM_ITEMS_TL012730_WHO
    HR_FORM_TAB_PAGES_TL012731_WHO
    HR_FORM_TEMPLATES_TL012732_WHO
    HR_FORM_WINDOWS_TL012733_WHO
    HR_ITEM_PROPERTIES_T012734_WHO
    HR_KI_HIERARCHIES_TL012736_WHO
    BEN_LER_F_TL012736_WHO
    HR_KI_INTEGRATIONS_T012736_WHO
    HR_KI_OPTION_TYPES_T012737_WHO
    HR_KI_TOPICS_TL012738_WHO
    HR_NAVIGATION_UNITS_012740_WHO
    HR_ORG_INFORMATION_T012741_WHO
    HR_TAB_PAGE_PROPERTI012748_WHO
    BEN_RPTG_GRP_TL012758_WHO
    PAY_CUSTOM_RESTRICTI012802_WHO
    PAY_ORG_PAYMENT_METH012813_WHO
    PER_ASSIGNMENT_STATU012832_WHO
    PER_COMPETENCES_TL012842_WHO
    PAY_ACTION_PARAMETERS_ARI
    PAY_ACTION_PARAMETERS_ARU
    PA_MAITN_ORG_UPDATE_BEST
    PA_MAITN_ORG_UPDATE_BEROW
    JA_IN_57F4_FORM_NUMBER_AIU_TRG
    XXXX_APPS_LOGON_TRG
    767 rows selected.

  • Table size exceeds Keep Pool Size (db_keep_cache_size)

    Hello,
    We have a situation where one of our applications started performing badly last week.
    After some analysis, we found this was due to data growth in a table that was stored in the KEEP pool.
    After the growth, the table size exceeded db_keep_cache_size.
    I was of the opinion that in such cases the KEEP pool would still be used, with the remaining data read in from the table as needed.
    But I ran some tests and found that is not the case: if the table size exceeds db_keep_cache_size, the KEEP pool appears not to be used at all.
    Is my inference correct here ?
    SQL> select * from v$version;
    BANNER
    Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
    PL/SQL Release 11.2.0.2.0 - Production
    CORE    11.2.0.2.0      Production
    TNS for Linux: Version 11.2.0.2.0 - Production
    NLSRTL Version 11.2.0.2.0 - Production

    Setup
    SQL> show parameter keep                    
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 4M
    SQL>
    SQL>     
    SQL> create table t1 storage (buffer_pool keep) as select * from all_objects union all select * from all_objects;
    Table created.
    SQL> set autotrace on
    SQL>
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    PL/SQL procedure successfully completed.
    SQL> set serveroutput on
    SQL> exec print_table('select * from user_segments where segment_name = ''T1''');
    SEGMENT_NAME                  : T1
    PARTITION_NAME                :
    SEGMENT_TYPE                  : TABLE
    SEGMENT_SUBTYPE               : ASSM
    TABLESPACE_NAME               : HR_TBS
    BYTES                         : 16777216
    BLOCKS                        : 2048
    EXTENTS                       : 31
    INITIAL_EXTENT                : 65536
    NEXT_EXTENT                   : 1048576
    MIN_EXTENTS                   : 1
    MAX_EXTENTS                   : 2147483645
    MAX_SIZE                      : 2147483645
    RETENTION                     :
    MINRETENTION                  :
    PCT_INCREASE                  :
    FREELISTS                     :
    FREELIST_GROUPS               :
    BUFFER_POOL                   : KEEP
    FLASH_CACHE                   : DEFAULT
    CELL_FLASH_CACHE              : DEFAULT
    PL/SQL procedure successfully completed.

    DB_KEEP_CACHE_SIZE=4M
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              9  recursive calls
              0  db block gets
           2006  consistent gets
           2218  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    DB_KEEP_CACHE_SIZE=10M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=10M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 12M
    SQL>
    SQL> set autotrace on
    SQL>
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1940  consistent gets
           1937  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    DB_KEEP_CACHE_SIZE=20M
    SQL> connect / as sysdba
    Connected.
    SQL>
    SQL> alter system set db_keep_cache_size=20M scope=both;
    System altered.
    SQL>
    SQL> connect hr/hr@orcl
    Connected.
    SQL>
    SQL> show parameter keep
    NAME                                 TYPE        VALUE
    buffer_pool_keep                     string
    control_file_record_keep_time        integer     7
    db_keep_cache_size                   big integer 20M
    SQL> set autotrace on
    SQL> select count(*) from t1;
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
           1656  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed
    SQL> /
      COUNT(*)
        135496
    Execution Plan
    Plan hash value: 3724264953
    | Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT   |      |     1 |   538   (1)| 00:00:07 |
    |   1 |  SORT AGGREGATE    |      |     1 |            |          |
    |   2 |   TABLE ACCESS FULL| T1   |   126K|   538   (1)| 00:00:07 |
    Note
       - dynamic sampling used for this statement (level=2)
    Statistics
              0  recursive calls
              0  db block gets
           1943  consistent gets
              0  physical reads
              0  redo size
            424  bytes sent via SQL*Net to client
            419  bytes received via SQL*Net from client
              2  SQL*Net roundtrips to/from client
              0  sorts (memory)
              0  sorts (disk)
              1  rows processed

    Only with a 20M db_keep_cache_size do I see no physical reads.
    Does it mean that if the db_keep_cache_size < table size, there is no caching for that table ?
    Or am I missing something ?
    Rgds,
    Gokul

    Hello Jonathan,
    Many thanks for your response.
    Here is the test I ran;
    SQL> select buffer_pool,blocks from dba_tables where owner = 'HR' and table_name = 'T1';
    BUFFER_     BLOCKS
    KEEP          1977
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
          1939
    SQL> show parameter db_keep_cache_size
    NAME                                 TYPE        VALUE
    db_keep_cache_size                   big integer 20M
    SQL>
    SQL> alter system set db_keep_cache_size = 5M scope=both;
    System altered.
    SQL> select count(*) from hr.t1;
      COUNT(*)
        135496
    SQL> select count(*) from v$bh where objd = (select data_object_id from dba_objects where owner = 'HR' and object_name = 'T1');
      COUNT(*)
           992

    I think my inference is wrong and, as you said, I am indeed seeing the effect of the tail end flushing the start of the table.
    Rgds,
    Gokul
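A hedged footnote on sizing: the KEEP pool only behaves as a true "keep" cache if it is at least as large as everything assigned to it, so a query along these lines (8K block size assumed; check db_block_size on your system) shows the minimum db_keep_cache_size needed:

```sql
-- Sketch: sum the blocks of every segment assigned to the KEEP pool and
-- convert to MB, assuming an 8K block size. This is the smallest value of
-- db_keep_cache_size that could hold all KEEP-pool segments at once.
SELECT CEIL(SUM(blocks) * 8192 / 1024 / 1024) AS min_keep_mb
FROM   dba_segments
WHERE  buffer_pool = 'KEEP';
```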

  • Checking HANA Table sizes in SAP PO

    Hello All,
    We have SAP PO 7.4 deployed on SAP HANA SP 09, and the SAP PO database is growing fast: it grew by 200 GB over the last 2 days. The current size of the data volume is almost 500 GB just for the PO system. This is a HANA tenant database installation.
    The total memory used by this system is about 90 GB of RAM. However, I just don't know how to get the list of tables with their sizes. I looked at the view M_CS_TABLES, which shows all the tables that are using memory, but that still does not add up. I need the sizes of all the physical tables so I can see which table is growing fast and come up with something that would explain why we are seeing about 500 GB of database size for an SAP Java PO system.
    Thanks for all the help.
    Kumar

    Hello,
    As a very simple bit of SQL that you can adapt to your needs:
    select table_name, round(table_size/1024/1024) as MB, table_type from SYS.M_TABLES where table_size/1024/1024 > 1000 order by table_size desc
    select * from M_CS_TABLES where memory_size_in_total/1024/1024 > 1000 order by memory_size_in_total desc
    It's just a basic way of looking at things, but at least it will give you all tables greater than 1 GB.
    I would imagine others will probably come up with something a bit more elegant and perhaps better adapted to your needs.
    Cheers,
    A.
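Building on A.'s queries above, a hypothetical variation that aggregates M_CS_TABLES by schema can show where the growth is concentrated (often more useful than a flat table list when a single component owns hundreds of small tables):

```sql
-- Sketch: total in-memory size per schema, largest first.
-- MEMORY_SIZE_IN_TOTAL is in bytes, hence the division to MB.
SELECT schema_name,
       ROUND(SUM(memory_size_in_total) / 1024 / 1024) AS total_mb
FROM   M_CS_TABLES
GROUP  BY schema_name
ORDER  BY total_mb DESC;
```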

  • LOAD vs OVERWRITE INSERT PRODUCE DIFFERENT TABLE SIZE!!

    Hi again!
    Here is another issue I don't understand:
    The size of the table doubles if I load the data with INSERT OVERWRITE versus LOAD. Here is an illustration of the problem:
    I created a table "item" and loaded the data from item.dat (approx. 28 MB). What happens is that the file item.dat is moved to hive/warehouse and, of course, its size stays the same.
    Now if I create another table "item2", with the same definition as item, and then load the data from item into item2 with the following command:
    INSERT OVERWRITE TABLE item2 SELECT * FROM item
    the size of table item2 is double that of item (approx. 55 MB).
    Why does this happen? And is there any way to avoid it?
    The situation escalates as the size of the data grows.
    P.S. This is only to illustrate the problem. In practice I am interested in pre-joining tables, but INSERT OVERWRITE increases the size of the joined table drastically (actual problem: 4 GB joined with 28 MB gives 18 GB).
    Thank you!

    Latest update on the issue:
    I tested on Cloudera as well and the behavior is the same. Here are some details after running describe formatted <table_name>:
    item (with LOAD):
    Table Type: MANAGED_TABLE
    Table Parameters:
    COLUMN_STATS_ACCURATE true
    numFiles 1
    numRows 0
    rawDataSize 0
    totalSize 28855325
    transient_lastDdlTime 1427988576
    # Storage Information
    SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
    InputFormat: org.apache.hadoop.mapred.TextInputFormat
    OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
    Compressed: No
    item2 (with INSERT OVERWRITE):
    Table Type: MANAGED_TABLE
    Table Parameters:
    COLUMN_STATS_ACCURATE true
    numFiles 1
    numRows 102000
    rawDataSize 52058005
    totalSize 52160005
    transient_lastDdlTime 1427990208
    # Storage Information
    SerDe Library: org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe
    InputFormat: org.apache.hadoop.mapred.TextInputFormat
    OutputFormat: org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
    Compressed: No
    And now I don't understand why the number of rows is 0 for item: when I query it, it returns 102000 rows.
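Since the question asks how to avoid the growth, a common mitigation (a sketch, not a full explanation of the doubling) is to compress the output that INSERT OVERWRITE writes back. The property names below are standard Hive settings of that era, but verify them against your Hive version:

```sql
-- Sketch (HiveQL): enable output compression before re-running the insert,
-- so the data Hive writes into item2's warehouse directory is gzipped.
SET hive.exec.compress.output=true;
SET mapred.output.compression.codec=org.apache.hadoop.io.compress.GzipCodec;

INSERT OVERWRITE TABLE item2 SELECT * FROM item;
```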

  • How to reduce table size after deleting data in table

    In one of our environments, we have a 300 GB table with 50 columns, some of which are large object (LOB) columns. The table contains data for the past year and grows by 40 GB every month. Because of this we have space issues, and we would like to reduce the table size by keeping only the most recent two months of data. What are the possible ways to do this? Database version 10.2.0.4 on RHEL 4.

    kumar wrote:
    Finally, we don't have downtime to do it by the exp/imp method.
    You have two problems to address:
    1. How you get from where you are now to where you want to be.
    2. Figuring out what you want to do when you get there so that you can stay there.
    Technically a simple strategy to "delete all data more than 64 days old" could be perfect - once you've got your table (and LOB segments) down to the correct size for two months of data. If you've got the licensing and can use local indexing, it might be even better to use (for example) daily partitioning by date.
    To GET to the 2-month data set you need to do something big and nasty - this will probably give you the choice between blocking access for a while and getting the job done relatively quickly (e.g. CTAS), or leaving the system to run slowly for a relatively long time while generating huge amounts of redo (e.g. delete 10 months of data, then shrink / compact). You also have a choice between using NO extra space to get the job done (shrink / compact) or doing something which effectively copies the last two months of data.
    Think about the side effects you're prepared to run with, then we can tell you which options might be most appropriate.
    Regards
    Jonathan Lewis
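For illustration, the CTAS route Jonathan mentions might be sketched like this (table and column names are hypothetical, and indexes, constraints, grants and triggers have to be recreated on the new table before the rename):

```sql
-- Sketch: copy only the last two months into a new table, then swap names.
CREATE TABLE big_table_new AS
  SELECT *
  FROM   big_table
  WHERE  created_date >= ADD_MONTHS(TRUNC(SYSDATE), -2);

-- Recreate indexes, constraints, grants and triggers on big_table_new here.

ALTER TABLE big_table     RENAME TO big_table_old;
ALTER TABLE big_table_new RENAME TO big_table;
-- DROP TABLE big_table_old PURGE;  -- once the copy is verified
```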

  • Actual tables size is different than tablespace,datafile size

    Hi,
    I created 10 tables, each with minimum extents of 256M, in the same tablespace; the total size was 2560M. After 3 months of running, none of the tables had grown beyond 256M, but the datafile size for that tablespace had increased sharply to 20G.
    I have spent a lot of time on this and could not find anything wrong.
    Please help.
    Thanks,
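One hedged way to start diagnosing this (the tablespace name below is a placeholder) is to compare segment usage, free space and file size for the tablespace, which shows whether the 20G is actually allocated to objects or is free space inside the files:

```sql
-- Sketch: space allocated to segments vs free space vs total file size,
-- all in MB, for one tablespace.
SELECT (SELECT ROUND(SUM(bytes)/1024/1024) FROM dba_segments
        WHERE  tablespace_name = 'MY_TS') AS segment_mb,
       (SELECT ROUND(SUM(bytes)/1024/1024) FROM dba_free_space
        WHERE  tablespace_name = 'MY_TS') AS free_mb,
       (SELECT ROUND(SUM(bytes)/1024/1024) FROM dba_data_files
        WHERE  tablespace_name = 'MY_TS') AS file_mb
FROM   dual;
```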

    The Member Feedback forum is for suggestions and feedback for OTN Developer Services. This forum is not monitored by Oracle support or product teams and so Oracle product and technology related questions will not be answered. We recommend that you post this thread to the Oracle Technology Network (OTN) > Products > Database > Database - General forum.

  • Estimate table size for last 4 years

    Hi,
    I am on Oracle 10g
    I need to estimate a table's size for the last 4 years. What I plan to do is get a count of the rows in the table for the last 4 years and then multiply that value by avg_row_len to get the total size. Is this technique correct, or do I need to add some overhead?
    Thanks

    Yes, the technique is correct, but it is better to account for some overhead. I usually multiply the results by 10 :)
    The most important thing to check is whether there is any trend in data volumes. Was the count of records 4 years ago more or less equal to last year's? Is the business growing or steady? How fast is it growing? What are the prospects for the future? The last year is not always 25% of the last 4 years; it happens that the last year is more than the other 3 years added together.
    The other, technical issue is the internal organisation of data in Oracle datafiles: the famous PCTFREE. If you expect that the data will be updated, then it is much better to keep some unused space in each database block in case some of your records get larger; this is much better for performance. For example, you leave 10% of each database block free, and when you update a record with a longer value (like replacing a NULL column with an actual 25-character string) the record still fits into the same block. You should account for this and add it to your estimates.
    On the other hand, if your records never get updated and you load them in batch, then maybe they can be ORDERed before insert and you can set the table up with the COMPRESS clause. Oracle's COMPRESS clause has very little in common with zip/gzip utilities, yet it can bring you significant space savings.
    Finally, there is no point in making estimates too accurate. They are only estimates, and reality will almost always differ. In general, it is better to overestimate and have some disk space unused than to underestimate and need people to deal with the issue. Disks are cheap; people on the project are expensive.
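The technique from the question, with the overhead discussed above folded in, might be sketched as follows (the created_date column name and the flat 20% allowance for block headers and PCTFREE are assumptions):

```sql
-- Sketch: count rows from the last 4 years, multiply by the average row
-- length from optimizer statistics, and add 20% overhead. Assumes stats
-- on MY_TABLE are current.
SELECT ROUND(COUNT(*) * MAX(t.avg_row_len) * 1.2 / 1024 / 1024) AS est_mb
FROM   my_table m,
       user_tables t
WHERE  t.table_name  = 'MY_TABLE'
AND    m.created_date >= ADD_MONTHS(SYSDATE, -48);
```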

  • Pipelined function with huge volume

    Hi all,
    I have a table of 5 million rows with an average length of 1K per row (a DSS system).
    My SGA is 2G and my PGA is 1G.
    I wonder if a pipelined function can support such a volume?
    Has anyone already used a pipelined function with a huge volume?
    TIA
    Yang

    Hello
    Well, just over a month later and we're pretty much sorted. Our pipelined functions were not the cause of the excessive memory consumption and the processes are now no longer consuming as much PGA as they were previously. Here's what I've learnt.
    1. Direct write operations on partitioned tables require two direct write buffers to be allocated per partition. By default, these buffers are 256K each, so it's 512K per partition. We had a table with 241 partitions, which meant we were inadvertently allocating 120MB of PGA without even trying. This is not a problem with pipelined functions.
    2. In 10.2 the total size of the buffers will be kept below the pga_aggregate_target, or 64k per buffer, whichever is higher. This is next to useless, though, as to really constrain the size of the buffers you need a ridiculously small pga_aggregate_target.
    3. The size of the buffers can be as low as 32k and can be set using an undocumented parameter "_ldr_io_size". In our environment (10.2.0.2 Win2003 using AWE) I've set it to 32k, meaning there will be 64k worth of buffers allocated to each partition, significantly reducing the amount of PGA required.
    4. I never want to speak to Oracle support again. We had a severity 1 SR open for over a month and it took the development team 18 days to get round to looking at the test case I supplied. Once they'd looked at it, they came back with the undocumented parameter, which worked, and the ridiculous suggestion that I set the PGA aggregate target to 50MB on a production box with 4GB and 300+ dedicated connections. Not only that, they told me that a pga_aggregate_target of 50MB was sensible, and did so in the most patronising way. Muppets.
    So in sum, our pipelined functions are working exceptionally well. We had some scary moments as we saw huge amounts of PGA being allocated and then 4030 memory errors but now it's sorted and chugging along nicely. The throughput is blistering especially when running in parallel - 200m rows generated and inserted in around 1 hour.
    To give some background on what we're using pipelined functions for....
    We have a list of master records that have schedules associated with them. Those schedules need to be exploded out to an hourly level and customised calendars need to be applied to them along with custom time-zone-style calculations. There are various lookups that need to be applied to the exploded schedules and a number of value calculations based on various rules. I did originally implement this in straight SQL but it was monsterous and ran like a dog. The SQL was highly complex and quite quickly became unmanageable. I decided to use pipelined functions because
    a) It immensely simplified the logic
    b) It gave us a very neat way to centralise the logic so it can be easily used by other systems - PL/SQL and SQL
    c) We can easily see what it is doing and make changes to the logic without affecting execution plans etc
    d) Its been exceptionally easy to tune using DBMS_PROFILER
    So that's that. I hope it's of use to anyone who's interested.
    I'm off to get a "pipelined functions rule" tattoo on my face.
    David
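For readers unfamiliar with the pattern David describes, here is a toy sketch (all names hypothetical) of a pipelined function that explodes each master row into hourly rows and streams them back to SQL:

```sql
-- Sketch: collection types plus a pipelined function that emits one row
-- per hour of a given day for a given master id.
CREATE TYPE hourly_row AS OBJECT (master_id NUMBER, slot_time DATE);
/
CREATE TYPE hourly_tab AS TABLE OF hourly_row;
/
CREATE OR REPLACE FUNCTION explode_hours(p_id NUMBER, p_day DATE)
  RETURN hourly_tab PIPELINED
IS
BEGIN
  FOR h IN 0 .. 23 LOOP
    -- Each PIPE ROW streams a row to the caller without buffering the
    -- whole result set in PGA.
    PIPE ROW (hourly_row(p_id, p_day + h / 24));
  END LOOP;
  RETURN;
END explode_hours;
/
-- Consumed directly from SQL, e.g.:
-- SELECT * FROM TABLE(explode_hours(42, DATE '2006-01-01'));
```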

  • I inserted my new 8gb 4x4 RAM but when i boot up the computer and choose this computer it only says that i have 4gb ram inserted and the computer isn't faster then with 2gb!? please help me! I have a MBP from 2009

    I ordered 8GB of RAM from OWC (Other World Computing), and when I inserted the RAM and booted up my computer it says that I only have 4GB of RAM installed, and the computer isn't faster than it was with 2GB! Please help me... I've got a MacBook Pro (mid 2009).

    Improperly seating memory is the curse of everyone, even experts.  No one wants to "break" the memory or the slot by pushing too hard.
    So, try again.  Push maybe *just a twinge* harder to seat the memory (just a twinge).
    As to speed ... memory mainly lets you keep more things active concurrently. Unless you were severely overtaxing your old memory and causing huge "Page Out" counts, only a faster HD makes for huge speed increases.

  • "Convert Text to Table" Size limit issue?

    Alphabetize a List
    I’ve been using this well known work around for years.
    Select your list and in the Menu bar click Format>Table>Convert Text to Table
    Select one of the column’s cells (1st click selects entire table, 2nd click selects individual cell)
    Open “Table Inspector” (Click Table icon at top of Pages document)
    Make sure “table” button is selected, not “format” button
    Choose Sort Ascending from the Edit Rows & Columns pop-up menu
    Finally, click Format>Table>Convert Table to Text.
    A few days ago I added items & my list was 999 items long, ~22 pages.
    Tonight, I added 4 more items. Still the same # pages but now 1,003 items long.
    Unable to Convert Text to Table! Tried for 45 minutes. I think there is a list length limit, perhaps 999 items?
    I tried closing the document without any changes, re-opening Pages, and re-adding my new items to the end of the list as always, and once again when I highlight the list & Format>Table>Convert Text to Table ..... nothing happens! I could highlight part of the list, up to 999 items, leave the 4 new items unhighlighted, & it works. I pasted the list into a new doc and copied a few items from the middle of the list & added them to the end of my new 999 list to make it 1,003 items long (but different items) & it did NOT work. I even attempted to add a single new item, making the list an even 1,000 items long, & nope, not working. Even restarted the iMac, no luck.
    I can get it to work with 999 or fewer items easily as always but no way when I add even a single new item.
    Anyone else have this problem?  It s/b easy to test out. If you have a list of say, 100 items, just copy & repeatedly paste into a new document multiple times to get over 1,000 & see if you can select all & then convert it from text to table.
    Thanks!
    Pages 08 v 3.03
    OS 10.6.8

    G,
    Yes, Pages has a table size limit, as you have discovered. Numbers has a much greater capacity for table length, so if you do your sort in Numbers you won't have any practical limitation.
    A better approach than switching to Numbers for the sort would be to download, install and activate Devon Wordservice. Then you could sort your list without converting it to a table.
    Jerry

  • Top n Growing tables in BW & R/3 with Oracle env

    Hi all,
    We are on BW 7.0 with Oracle 10.2.0.2 , please let me know how to get the top N growing tables & Top N largest tables.
    I remember collecting these stats from TX: DB02 when we had MS SQLserver as the DB.  It was as easy as clicking a button.  but with Oracle I have been unable to find these options in Db02 or dbacockpit.
    Thanks,
    Nick

    Nick,
    Go to tcode DB02OLD > Detailed Analysis > Object Name *, Tablespace *, Object Type tab. You will get a list of all tables; take the top 50 from this list.
    The EarlyWatch report also gives a list of the top 20 tables. Check your EarlyWatch report for this.
    You can also use the following SQL query:
    select * from
    (select owner, segment_name, segment_type, tablespace_name, sum (bytes/1024/1024) MB
    from dba_extents
    where segment_type = 'TABLE'
    group by owner, segment_name, segment_type, tablespace_name
    order by MB desc)
    where rownum <= N;
    Put in any value for N (e.g. 50) to find the top N largest tables.
    If you are planning to go for a table reorg then refer below link.
    Re: is it possible to see decrease of table size & size of tablespace
    Hope this helps.
    Thanks,
    Sushil
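A hedged variant of Sushil's query uses DBA_SEGMENTS instead of DBA_EXTENTS; it returns the same top-N list but is usually much faster, since DBA_SEGMENTS has one row per segment rather than one per extent:

```sql
-- Sketch: top 50 tables by allocated size in MB, from DBA_SEGMENTS.
SELECT *
FROM  (SELECT owner, segment_name, tablespace_name,
              ROUND(bytes / 1024 / 1024) AS mb
       FROM   dba_segments
       WHERE  segment_type = 'TABLE'
       ORDER  BY bytes DESC)
WHERE rownum <= 50;
```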

Maybe you are looking for

  • Using FH9 files in dreamweaver

    I have an .fh9 file that I want to insert in Dreamweaver. I need to know if .fh9 files are the correct format to use for web pages, or whether a different format should be used.

  • Table name for the transaction BNK_APP

    Hi,           I want to know the table name for the fields like Batch pymt no , company code , batch currency used in structure BNK_STR_BATCH_REL_APPR . Thanks & Regards

  • E1P transaction FBL5N default date

    Hi specialists. OK, this seems a stupid question even to me... but I have forgotten this. When opening FBL5N, in dynamic values there is a date selection. What do I have to do to keep this value always as the current date, so that when I save this as a variant

  • Troubles with update iTunes on mac 10.6.8

    Cannot update iTunes on my Mac (10.6.8). Before it finishes the update it tells me to close iTunes, which I do, but it doesn't continue to finish. What to do?

  • Volume   bitmove

    I'm unable to get volume in a bitmove that I have downloaded to QuickTime.   How can I get it to work ?    The video is fine, just the volume does not work. Am I missing something or do I need a different program to work this video ? Thanking all in