View in Datawarehouse environment
Hi there,
Need your inputs to decide on the approach to move more than two billion rows from one table to another. The problem statement is:
1. The source and target table structures are different, so we can't use a plain Insert into ... Select statement.
2. For every N records read from the source table, 1 record goes into the target, where N can be any number less than 24. So it's basically both denormalization and aggregation.
3. Data verification by looking up column values is required.
Approach 1:
Using an ETL tool (Informatica), move the records and perform the aggregation in Informatica, with an 'order by' in the source query. I have doubts whether the tool will be able to handle such a large amount of data.
Approach 2:
Create a view to perform the aggregation (not a materialized view, due to space constraints) and use the ETL tool to move the data.
The question is: does the view help improve performance? That is, because Oracle will merge the SQL used to create the view into the query that reads from it, the aggregation will be done in the database in Approach 2, as opposed to being done by the tool in Approach 1.
Thanks for the help.
-N
Are both the source and target tables in Oracle? If so, I suggest Approach 3: do everything in the database. The major drawback I see with most ETL tools like Informatica, DataStage, etc., is that they usually do row-by-row processing, which can be very slow. It is possible to do very complex transformations using only SQL.
In your post you state that you can't use "Insert Into...Select" because the table structures are different; in many cases, you can formulate a select statement that produces a result set consistent with the structure of the target table. While the SQL may be somewhat complex, you will almost always get better performance using a set-based approach (processing many records at once) than a row-by-row approach.
I strongly suggest looking at the Data Warehousing Guide.
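To make the set-based suggestion concrete, here is a minimal sketch, assuming hypothetical table and column names (src_detail, tgt_summary, cust_id, period_key, amt), since the poster's actual structures are not shown:

```sql
-- Hypothetical names throughout; a direct-path insert that collapses
-- N source rows per key into 1 target row, entirely in the database.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t) */ INTO tgt_summary t
       (cust_id, period_key, total_amt, txn_count)
SELECT s.cust_id,
       s.period_key,
       SUM(s.amt),          -- the aggregation step
       COUNT(*)             -- the N (< 24) rows collapse to one here
FROM   src_detail s
GROUP BY s.cust_id, s.period_key;

COMMIT;
```

The verification lookups mentioned in point 3 can usually be folded in as joins to the reference tables inside the same SELECT, keeping everything in one set-based pass.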
Similar Messages
-
Help needed for hash_area_size setting for Datawarehouse environment
We have an Oracle 10g Datawarehousing environment running on a 3-node RAC
with 16 GB RAM and 4 CPUs each, and we have roughly 200 users and night jobs running on this D/W.
We find that the query performance of all ETL processes and joins is quite slow.
How much should we increase the value of the hash_area_size parameter for this Datawarehouse environment? This is a Production database, with Oracle Database 10g Enterprise Edition Release 10.1.0.5.0.
We use the OWB 10g Tool for this D/W, and we need to change hash_area_size to improve the performance of the ETL processes.
These are the Oracle init parameter settings used, as shown below:
Kindly suggest ,
Thanks & best regards ,
===========================================================
ORBIT
__db_cache_size 1073741824
__java_pool_size 67108864
__large_pool_size 318767104
__shared_pool_size 1744830464
_optimizer_cost_based_transformation OFF
active_instance_count
aq_tm_processes 1
archive_lag_target 0
asm_diskgroups
asm_diskstring
asm_power_limit 1
audit_file_dest /dboracle/orabase/product/10.1.0/rdbms/audit
audit_sys_operations FALSE
audit_trail NONE
background_core_dump partial
background_dump_dest /dborafiles/orbit/ORBIT01/admin/bdump
backup_tape_io_slaves TRUE
bitmap_merge_area_size 1048576
blank_trimming FALSE
buffer_pool_keep
buffer_pool_recycle
circuits
cluster_database TRUE
cluster_database_instances 3
cluster_interconnects
commit_point_strength 1
compatible 10.1.0
control_file_record_keep_time 90
control_files #NAME?
core_dump_dest /dborafiles/orbit/ORBIT01/admin/cdump
cpu_count 4
create_bitmap_area_size 8388608
create_stored_outlines
cursor_sharing EXACT
cursor_space_for_time FALSE
db_16k_cache_size 0
db_2k_cache_size 0
db_32k_cache_size 0
db_4k_cache_size 0
db_8k_cache_size 0
db_block_buffers 0
db_block_checking FALSE
db_block_checksum TRUE
db_block_size 8192
db_cache_advice ON
db_cache_size 1073741824
db_create_file_dest #NAME?
db_create_online_log_dest_1 #NAME?
db_create_online_log_dest_2 #NAME?
db_create_online_log_dest_3
db_create_online_log_dest_4
db_create_online_log_dest_5
db_domain
db_file_multiblock_read_count 64
db_file_name_convert
db_files 999
db_flashback_retention_target 1440
db_keep_cache_size 0
db_name ORBIT
db_recovery_file_dest #NAME?
db_recovery_file_dest_size 262144000000
db_recycle_cache_size 0
db_unique_name ORBIT
db_writer_processes 1
dbwr_io_slaves 0
ddl_wait_for_locks FALSE
dg_broker_config_file1 /dboracle/orabase/product/10.1.0/dbs/dr1ORBIT.dat
dg_broker_config_file2 /dboracle/orabase/product/10.1.0/dbs/dr2ORBIT.dat
dg_broker_start FALSE
disk_asynch_io TRUE
dispatchers
distributed_lock_timeout 60
dml_locks 9700
drs_start FALSE
enqueue_resources 10719
event
fal_client
fal_server
fast_start_io_target 0
fast_start_mttr_target 0
fast_start_parallel_rollback LOW
file_mapping FALSE
fileio_network_adapters
filesystemio_options asynch
fixed_date
gc_files_to_locks
gcs_server_processes 2
global_context_pool_size
global_names FALSE
hash_area_size 131072
hi_shared_memory_address 0
hpux_sched_noage 0
hs_autoregister TRUE
ifile
instance_groups
instance_name ORBIT01
instance_number 1
instance_type RDBMS
java_max_sessionspace_size 0
java_pool_size 67108864
java_soft_sessionspace_limit 0
job_queue_processes 10
large_pool_size 318767104
ldap_directory_access NONE
license_max_sessions 0
license_max_users 0
license_sessions_warning 0
local_listener
lock_name_space
lock_sga FALSE
log_archive_config
log_archive_dest
log_archive_dest_1 LOCATION=+ORBT_A06635_DATA1_ASM/ORBIT/ARCHIVELOG/
log_archive_dest_10
log_archive_dest_2
log_archive_dest_3
log_archive_dest_4
log_archive_dest_5
log_archive_dest_6
log_archive_dest_7
log_archive_dest_8
log_archive_dest_9
log_archive_dest_state_1 enable
log_archive_dest_state_10 enable
log_archive_dest_state_2 enable
log_archive_dest_state_3 enable
log_archive_dest_state_4 enable
log_archive_dest_state_5 enable
log_archive_dest_state_6 enable
log_archive_dest_state_7 enable
log_archive_dest_state_8 enable
log_archive_dest_state_9 enable
log_archive_duplex_dest
log_archive_format %t_%s_%r.arc
log_archive_local_first TRUE
log_archive_max_processes 2
log_archive_min_succeed_dest 1
log_archive_start FALSE
log_archive_trace 0
log_buffer 1167360
log_checkpoint_interval 0
log_checkpoint_timeout 1800
log_checkpoints_to_alert FALSE
log_file_name_convert
logmnr_max_persistent_sessions 1
max_commit_propagation_delay 700
max_dispatchers
max_dump_file_size UNLIMITED
max_enabled_roles 150
max_shared_servers
nls_calendar
nls_comp
nls_currency #
nls_date_format DD-MON-RRRR
nls_date_language ENGLISH
nls_dual_currency ?
nls_iso_currency UNITED KINGDOM
nls_language ENGLISH
nls_length_semantics BYTE
nls_nchar_conv_excp FALSE
nls_numeric_characters
nls_sort
nls_territory UNITED KINGDOM
nls_time_format HH24.MI.SSXFF
nls_time_tz_format HH24.MI.SSXFF TZR
nls_timestamp_format DD-MON-RR HH24.MI.SSXFF
nls_timestamp_tz_format DD-MON-RR HH24.MI.SSXFF TZR
O7_DICTIONARY_ACCESSIBILITY FALSE
object_cache_max_size_percent 10
object_cache_optimal_size 102400
olap_page_pool_size 0
open_cursors 1024
open_links 4
open_links_per_instance 4
optimizer_dynamic_sampling 2
optimizer_features_enable 10.1.0.5
optimizer_index_caching 0
optimizer_index_cost_adj 100
optimizer_mode ALL_ROWS
os_authent_prefix ops$
os_roles FALSE
parallel_adaptive_multi_user TRUE
parallel_automatic_tuning TRUE
parallel_execution_message_size 4096
parallel_instance_group
parallel_max_servers 80
parallel_min_percent 0
parallel_min_servers 0
parallel_server TRUE
parallel_server_instances 3
parallel_threads_per_cpu 2
pga_aggregate_target 8589934592
plsql_code_type INTERPRETED
plsql_compiler_flags INTERPRETED
plsql_debug FALSE
plsql_native_library_dir
plsql_native_library_subdir_count 0
plsql_optimize_level 2
plsql_v2_compatibility FALSE
plsql_warnings DISABLE:ALL
pre_page_sga FALSE
processes 600
query_rewrite_enabled TRUE
query_rewrite_integrity enforced
rdbms_server_dn
read_only_open_delayed FALSE
recovery_parallelism 0
remote_archive_enable TRUE
remote_dependencies_mode TIMESTAMP
remote_listener
remote_login_passwordfile EXCLUSIVE
remote_os_authent FALSE
remote_os_roles FALSE
replication_dependency_tracking TRUE
resource_limit FALSE
resource_manager_plan
resumable_timeout 0
rollback_segments
serial_reuse disable
service_names ORBIT
session_cached_cursors 0
session_max_open_files 10
sessions 2205
sga_max_size 3221225472
sga_target 3221225472
shadow_core_dump partial
shared_memory_address 0
shared_pool_reserved_size 102760448
shared_pool_size 318767104
shared_server_sessions
shared_servers 0
skip_unusable_indexes TRUE
smtp_out_server
sort_area_retained_size 0
sort_area_size 65536
sp_name ORBIT
spfile #NAME?
sql_trace FALSE
sql_version NATIVE
sql92_security FALSE
sqltune_category DEFAULT
standby_archive_dest ?/dbs/arch
standby_file_management MANUAL
star_transformation_enabled TRUE
statistics_level TYPICAL
streams_pool_size 0
tape_asynch_io TRUE
thread 1
timed_os_statistics 0
timed_statistics TRUE
trace_enabled TRUE
tracefile_identifier
transactions 2425
transactions_per_rollback_segment 5
undo_management AUTO
undo_retention 7200
undo_tablespace UNDOTBS1
use_indirect_data_buffers FALSE
user_dump_dest /dborafiles/orbit/ORBIT01/admin/udump
utl_file_dir /orbit_serial/oracle/utl_out
workarea_size_policy AUTO
The parameters are already unset in the environment, but do show up in v$parameter, much like shared_pool_size is visible in v$parameter despite only sga_target being set.
SQL> show parameter sort
NAME TYPE VALUE
_sort_elimination_cost_ratio integer 5
nls_sort string binary
sort_area_retained_size integer 0
sort_area_size integer 65536
SQL> show parameter hash
NAME TYPE VALUE
hash_area_size integer 131072
SQL> exit
hash_area_size and sort_area_size should only be set when not using automatic PGA memory management; manual PGA management is not supported in EBS databases.
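Since the listing above shows workarea_size_policy = AUTO, hash joins are sized from pga_aggregate_target rather than hash_area_size, so that is the knob to tune. A sketch (assuming an spfile and ALTER SYSTEM privileges):

```sql
-- Under automatic PGA management, hash_area_size is ignored for
-- dedicated server sessions; size the overall target instead.
ALTER SYSTEM SET pga_aggregate_target = 8G SCOPE = BOTH;

-- Sanity-check current PGA usage before and after:
SELECT name, value
FROM   v$pgastat
WHERE  name IN ('aggregate PGA target parameter',
                'total PGA allocated',
                'cache hit percentage');
```

Note that the parameter dump already shows pga_aggregate_target = 8589934592 (8 GB), so check v$pgastat first to see whether it is actually exhausted before raising it.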
Database Initialization Parameters for Oracle Applications 11i
http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216205.1 -
Video playback issues in Captivate 6 swf when viewed in testing environment
I've created a training video from a company template in Captivate 6 and produced a swf file. It plays from my local machine and has been tested by others from our SharePoint portal. I uploaded the project through to our Team Foundation Server in order for it to be included and pushed to our testing environment. However, when viewed, no content populates; just a blank white screen.
Has anyone ever come across this or perhaps know a workaround?
Any and all help would greatly be appreciated.
Thanks.
If your Team Foundation Server is a LAN server and not a web server then your issue is most likely to be Flash Global Security.
Please see this page for reasons and resolutions:
http://www.infosemantics.com.au/adobe-captivate-troubleshooting/how-to-set-up-flash-global-security -
Any experiences when using AAG in a DataWarehouse environment
Hi,
I was wondering if there are people out there who have experience with AAGs (AlwaysOn Availability Groups) in a Data Warehouse environment.
By Data Warehouse I mean SQL Server instances with a lot of data and a lot of batch jobs running overnight (extracting and loading lots of data).
How well is AAG able to work with such a system? Is it fast enough?
I have seen somewhere that the Parallel Data Warehouse edition of SQL Server doesn't support AAG. There must be a reason for this.
greetings,
Kick Vieleers
http://msdn.microsoft.com/en-us/library/jj191711.aspx
Best Regards, Uri Dimant, SQL Server MVP
http://sqlblog.com/blogs/uri_dimant/ -
Environment Channel Strips view like the mixer window?
Is there a chance to set the environment Channel Strips view like the mixer window?
The mixer has the new graphic view whereas the environment has the old LP9 graphic view.
Requesting features here... doesn't achieve much as the Dev team do not frequent this forum...
Try using the feedback form instead....
https://www.apple.com/feedback/logic-pro.html -
Cisco ISE & NAC Agent in a Vmware View VDI Environment
Hi,
Anyone deployed the Cisco ISE NAC agent on a VMware View virtual desktop (VDI) environment?
There are no known issues regarding VMware View that would cause this.
For AV see -> http://www.novell.com/support/kb/doc.php?id=7007545
I find ProcMon from Sysinternals useful to see if other processes such as
AV are hitting those files unexpectedly. A few times I have seen AV
exclusions not quite working as expected until tweaked.
The ZMD-Messages.log may show if the agent is doing something....
On 9/30/2014 9:36 PM, harrymsg wrote:
>
> We have been running 11.2.4 in our View VDI environment and overall been
> very successful. We just rolled Win 7 and are seeing approx. 10% of the
> VMs with the zenworkswindowsservice.exe running steadily around 50% for
> hours. Any thoughts? One thing I just set to try was excluding that
> from Microsoft FEP AV. Anything other thoughts to resolve? Thanks.
>
>
Craig Wilson - MCNE, MCSE, CCNA
Novell Technical Support Engineer
Novell does not officially monitor these forums.
Suggestions/Opinions/Statements made by me are solely my own.
These thoughts may not be shared by either Novell or any rational human. -
Where's that "Auto-Manage' Environment view setting?
I can't find it in the manual or the program, but I know it's there somewhere...
Where the heck's that little checkbox to make the Arrange page automatically manage new strips in the environment?
You'd think it'd be trackmixer/options... Nope. Trackmixer/view - nope. Environment window/view Nope. Environment window/options - nope. Preferences/display - nope.
Ya ever notice how many redundant menus this behemoth has? Hadn't bugged me much, till now.
Fab - thanks. Musta looked right past it in my hurry -- never expected it to be under 'audio' settings. But I guess it kinda sorta makes sense. Maybe.
You'd really think this would be in at least one of the mixer or environment "view" or "options" menus. Since it mostly deals with viewing options in the mixer. But hey - that's too Logical, innit!
thanks again. -
Issue with creating materialized view
Hi,
We have a select query (containing joins, aggregates and UNION ALLs) which we use to create materialized views. We were able to create these mat views in the development environment; however, when we tried to run the same scripts in a higher environment, the creation never completes. The higher environment currently has three times more data than Dev.
The below operations complete well in time, but when we add "CREATE MATERIALIZED VIEW MAT_VIEW_NAME" to the select query it takes forever (we cancelled the operation after waiting for more than 1 hour):
Select count(1) from the complete mat. view query takes 3.2 min to complete; the query results in 3,010,068 rows.
Create a normal VIEW using the complete mat. view select query takes 3.06 sec to complete.
Create a table using the complete mat. view select query takes 5.75 min to complete; the query results in 3,010,068 rows.
Does anyone have an idea why this could be happening? If you have ever faced this kind of issue, can you please provide pointers on how you were able to solve the problem. We are using Oracle 11g.
Let me know if I have to provide any other information for you to understand the issue better.
Thanks
SELECT vis.uid, findet.yr, findet.ect, vis.ind,
tm_view.col1_id, tm_view.col1_name,
tm_view.col2_id, tm_view.col2_name,
tm_view.col3_id, tm_view.col3_name,
clnt.cl_id, clnt.cl_nm,
prodparent_view.parent_cd,
prodparent_view.parent_desc,
prod_view.parent_cd,
prod_view.parent_desc,
prod_view.child_cd,
prod_view.child_desc,
SUM (value1), SUM (value2),
SUM (value3), SUM (value4),
SUM (value5), SUM (value6),
SUM (value7), SUM (value8),
SUM (value9), SUM (value10),
SUM (value11), SUM (value12)
FROM vis,
(SELECT *
FROM analytic_e,
(SELECT table_val
FROM TAB_CHECK s
WHERE s.tgt_table_nm = 'ANALYTIC'
AND s.table_val = 'ANALYTIC_E')
WHERE table_val = 'ANALYTIC_E'
UNION ALL
SELECT *
FROM analytic_o,
(SELECT switch_val
FROM tab_check s
WHERE s.tgt_table_nm = 'ANALYTIC'
AND s.switch_val = 'ANALYTIC_O')
WHERE switch_val = 'ANALYTIC_O') findet,
prod_view,
prodparent_view,
tm_view,
clnt,
(select to_number(to_char(ref_dt,'yyyy'))-1 year_agg from DATE_TABLE) tbabt
WHERE tbabt.year_agg = findet.yr
AND vis.cl_key = findet.cl_key
AND tm_view.hi_key = findet.hi_key
AND prod_view.child_cd = findet.prod_cd
AND clnt.cl_key = findet.cl_key
AND prodparent_view.child_cd = prod_view.parent_cd
GROUP BY vis.uid, findet.yr, findet.ect, vis.ind,
tm_view.col1_id, tm_view.col1_name,
tm_view.col2_id, tm_view.col2_name,
tm_view.col3_id, tm_view.col3_name,
clnt.cl_id, clnt.cl_nm,
prodparent_view.parent_cd,
prodparent_view.parent_desc,
prod_view.parent_cd,
prod_view.parent_desc,
prod_view.child_cd,
prod_view.child_desc
Higher Environment
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 57M| 65G| | 20M (1)| 66:48:28 | | |
| 1 | HASH GROUP BY | | 57M| 65G| 73G| 20M (1)| 66:48:28 | | |
|* 2 | HASH JOIN | | 57M| 65G| | 109K (2)| 00:21:58 | | |
| 3 | TABLE ACCESS BY INDEX ROWID | HIER | 2100 | 244K| | 172 (0)| 00:00:03 | | |
|* 4 | INDEX RANGE SCAN | UK_HIER | 2100 | | | 16 (0)| 00:00:01 | | |
|* 5 | HASH JOIN | | 57M| 59G| | 109K (1)| 00:21:52 | | |
| 6 | VIEW | VW_GBF_25 | 1908 | 868K| | 2612 (1)| 00:00:32 | | |
| 7 | HASH GROUP BY | | 1908 | 141K| | 2612 (1)| 00:00:32 | | |
| 8 | VIEW | | 45107 | 3347K| | 2609 (1)| 00:00:32 | | |
| 9 | UNION-ALL | | | | | | | | |
| 10 | HASH UNIQUE | | 22518 | 1473K| 1872K| 1010 (1)| 00:00:13 | | |
|* 11 | TABLE ACCESS FULL | HIER | 22518 | 1473K| | 650 (1)| 00:00:08 | | |
| 12 | HASH UNIQUE | | 22518 | 1165K| 1512K| 947 (1)| 00:00:12 | | |
|* 13 | TABLE ACCESS FULL | HIER | 22518 | 1165K| | 650 (1)| 00:00:08 | | |
| 14 | HASH UNIQUE | | 71 | 1917 | | 652 (1)| 00:00:08 | | |
|* 15 | TABLE ACCESS FULL | HIER | 22518 | 593K| | 650 (1)| 00:00:08 | | |
|* 16 | HASH JOIN | | 64M| 38G| 4936K| 106K (1)| 00:21:16 | | |
| 17 | VIEW | | 45107 | 4404K| | 2609 (1)| 00:00:32 | | |
| 18 | UNION-ALL | | | | | | | | |
| 19 | HASH UNIQUE | | 22518 | 1473K| 1872K| 1010 (1)| 00:00:13 | | |
|* 20 | TABLE ACCESS FULL | HIER | 22518 | 1473K| | 650 (1)| 00:00:08 | | |
| 21 | HASH UNIQUE | | 22518 | 1165K| 1512K| 947 (1)| 00:00:12 | | |
|* 22 | TABLE ACCESS FULL | HIER | 22518 | 1165K| | 650 (1)| 00:00:08 | | |
| 23 | HASH UNIQUE | | 71 | 1917 | | 652 (1)| 00:00:08 | | |
|* 24 | TABLE ACCESS FULL | HIER | 22518 | 593K| | 650 (1)| 00:00:08 | | |
|* 25 | HASH JOIN | | 3021K| 1550M| 15M| 24492 (1)| 00:04:54 | | |
| 26 | PARTITION HASH ALL | | 491K| 10M| | 1059 (1)| 00:00:13 | 1 | 16 |
| 27 | MAT_VIEW ACCESS FULL | VIS | 491K| 10M| | 1059 (1)| 00:00:13 | 1 | 16 |
|* 28 | HASH JOIN | | 388K| 190M| 6056K| 12929 (1)| 00:02:36 | | |
| 29 | TABLE ACCESS FULL | CLNT | 64540 | 5294K| | 411 (1)| 00:00:05 | | |
|* 30 | HASH JOIN | | 388K| 159M| | 4072 (1)| 00:00:49 | | |
| 31 | TABLE ACCESS FULL | DATE_TABLE | 2 | 16 | | 3 (0)| 00:00:01 | | |
| 32 | VIEW | | 582K| 235M| | 4065 (1)| 00:00:49 | | |
| 33 | UNION-ALL | | | | | | | | |
| 34 | NESTED LOOPS | | 272K| 52M| | 1860 (1)| 00:00:23 | | |
|* 35 | TABLE ACCESS BY INDEX ROWID| TAB_CHECK | 1 | 46 | | 1 (0)| 00:00:01 | | |
|* 36 | INDEX UNIQUE SCAN | SYS_C0041157 | 1 | | | 0 (0)| 00:00:01 | | |
| 37 | PARTITION RANGE ALL | | 272K| 40M| | 1859 (1)| 00:00:23 | 1 |1048575|
| 38 | TABLE ACCESS FULL | ANALYTIC_E | 272K| 40M| | 1859 (1)| 00:00:23 | 1 |1048575|
| 39 | NESTED LOOPS | | 309K| 58M| | 2205 (1)| 00:00:27 | | |
|* 40 | TABLE ACCESS BY INDEX ROWID| TAB_CHECK | 1 | 46 | | 1 (0)| 00:00:01 | | |
|* 41 | INDEX UNIQUE SCAN | SYS_C0041157 | 1 | | | 0 (0)| 00:00:01 | | |
| 42 | PARTITION RANGE ALL | | 309K| 44M| | 2204 (1)| 00:00:27 | 1 |1048575|
| 43 | TABLE ACCESS FULL | ANALYTIC_O | 309K| 44M| | 2204 (1)| 00:00:27 | 1 |1048575|
Development
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)| Time | Pstart| Pstop |
| 0 | SELECT STATEMENT | | 1696K| 1276M| | 291K (1)| 00:58:20 | | |
| 1 | HASH GROUP BY | | 1696K| 1276M| 1325M| 291K (1)| 00:58:20 | | |
|* 2 | HASH JOIN | | 1696K| 1276M| | 9721 (2)| 00:01:57 | | |
| 3 | VIEW | | 15464 | 1132K| | 1855 (1)| 00:00:23 | | |
| 4 | UNION-ALL | | | | | | | | |
| 5 | HASH UNIQUE | | 7681 | 502K| | 618 (1)| 00:00:08 | | |
|* 6 | TABLE ACCESS FULL | HIER | 7681 | 502K| | 617 (1)| 00:00:08 | | |
| 7 | HASH UNIQUE | | 7681 | 375K| | 618 (1)| 00:00:08 | | |
|* 8 | TABLE ACCESS FULL | HIER | 7681 | 375K| | 617 (1)| 00:00:08 | | |
| 9 | HASH UNIQUE | | 102 | 2448 | | 618 (1)| 00:00:08 | | |
|* 10 | TABLE ACCESS FULL | HIER | 7681 | 180K| | 617 (1)| 00:00:08 | | |
|* 11 | HASH JOIN | | 371K| 252M| | 7847 (2)| 00:01:35 | | |
| 12 | VIEW | | 15464 | 1510K| | 1855 (1)| 00:00:23 | | |
| 13 | UNION-ALL | | | | | | | | |
| 14 | HASH UNIQUE | | 7681 | 502K| | 618 (1)| 00:00:08 | | |
|* 15 | TABLE ACCESS FULL | HIER | 7681 | 502K| | 617 (1)| 00:00:08 | | |
| 16 | HASH UNIQUE | | 7681 | 375K| | 618 (1)| 00:00:08 | | |
|* 17 | TABLE ACCESS FULL | HIER | 7681 | 375K| | 617 (1)| 00:00:08 | | |
| 18 | HASH UNIQUE | | 102 | 2448 | | 618 (1)| 00:00:08 | | |
|* 19 | TABLE ACCESS FULL | HIER | 7681 | 180K| | 617 (1)| 00:00:08 | | |
|* 20 | HASH JOIN | | 122K| 71M| | 5987 (2)| 00:01:12 | | |
|* 21 | TABLE ACCESS FULL | HIER | 7681 | 915K| | 617 (1)| 00:00:08 | | |
|* 22 | HASH JOIN | | 122K| 57M| 4512K| 5368 (2)| 00:01:05 | | |
|* 23 | HASH JOIN | | 9556 | 4395K| 3856K| 2409 (2)| 00:00:29 | | |
| 24 | TABLE ACCESS FULL | CLNT | 74426 | 2979K| | 310 (1)| 00:00:04 | | |
|* 25 | HASH JOIN | | 9556 | 4012K| | 1710 (2)| 00:00:21 | | |
| 26 | TABLE ACCESS FULL | DATE_TABLE | 1 | 7 | | 3 (0)| 00:00:01 | | |
| 27 | VIEW | | 19112 | 7894K| | 1706 (2)| 00:00:21 | | |
| 28 | UNION-ALL | | | | | | | | |
| 29 | MERGE JOIN CARTESIAN | | 19111 | 4068K| | 1701 (2)| 00:00:21 | | |
|* 30 | TABLE ACCESS FULL | TAB_CHECK | 1 | 49 | | 3 (0)| 00:00:01 | | |
| 31 | BUFFER SORT | | 248K| 40M| | 1698 (2)| 00:00:21 | | |
| 32 | PARTITION RANGE ALL| | 248K| 40M| | 1698 (2)| 00:00:21 | 1 |1048575|
| 33 | TABLE ACCESS FULL | ANALYTIC_E | 248K| 40M| | 1698 (2)| 00:00:21 | 1 |1048575|
| 34 | MERGE JOIN CARTESIAN | | 1 | 537 | | 5 (0)| 00:00:01 | | |
| 35 | PARTITION RANGE ALL | | 1 | 488 | | 2 (0)| 00:00:01 | 1 |1048575|
| 36 | TABLE ACCESS FULL | ANALYTIC_O | 1 | 488 | | 2 (0)| 00:00:01 | 1 |1048575|
| 37 | BUFFER SORT | | 1 | 49 | | 3 (0)| 00:00:01 | | |
|* 38 | TABLE ACCESS FULL | TAB_CHECK | 1 | 49 | | 3 (0)| 00:00:01 | | |
| 39 | PARTITION HASH ALL | | 810K| 16M| | 1456 (2)| 00:00:18 | 1 | 16 |
| 40 | MAT_VIEW ACCESS FULL | VIS | 810K| 16M| | 1456 (2)| 00:00:18 | 1 | 16 |
----------------------------------------------------------------------------------------------------------------------------------------- -
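Given that CTAS on the same query finishes in under six minutes while CREATE MATERIALIZED VIEW hangs, one common workaround is to build the table first and register it as a materialized view on a prebuilt table. A sketch, with the hypothetical name mat_view_name standing in for the real object:

```sql
-- Step 1: materialize the data with plain CTAS (known to complete).
CREATE TABLE mat_view_name AS
SELECT ...;   -- the full aggregation query shown above

-- Step 2: register the populated table as an MV without re-running
-- the query; later refreshes re-execute the defining SELECT.
CREATE MATERIALIZED VIEW mat_view_name
ON PREBUILT TABLE
REFRESH COMPLETE ON DEMAND
AS
SELECT ...;   -- the same full aggregation query
```

Comparing the two plans above is also worthwhile: the higher environment estimates 57M rows and a 73G temp-space HASH GROUP BY against an actual result of about 3M rows, which suggests stale or missing statistics rather than the MV syntax itself.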
Smart View 11.1.1.2 Data Source Manager is Disabled
I just did a fresh install of Oracle Hyperion EPM 11.1.1.2 in a Windows 2003 Enterprise Server (64-bit) environment.
I'm now having problems running Smart View in this environment.
When I attempt to connect to a cube, from within Excel, the Data Source Manager appears in the right-hand pane, but I can't interact with it. Also, nothing at all appears in the Data Source Manager pane.
I've duplicated this problem in a Windows 7 RC1 64-bit environment as well (Yes, I know. It's not supported).
This looks like some sort of Active X issue.
I am running Office 2007 SP1. However, applying the latest MS Office updates did not alleviate the issue.
Any help or ideas would be appreciated.
Thanks,
Alan Farkas
You will most likely have to use the dropdown and add the server into the shared connections. Once done, it should be there for all users. It is best to add it using an ID with Admin access. I think there is a bug that would allow anyone to do it, but I'm not sure what version that was.
-
Is it possible to use an iPad camera to view the outside environment on a Mac? I would like to be able to move the iPad to different positions and still view the feed on a Mac.
Lightroom lets you export images in any resolution you wish.
I often upload Low Res proofs to my server. -
Dynamics crm 2015 unexpectedly deleted associated view for custom entity
I have a contact entity and a custom entity; the custom entity had an associated view. Unfortunately I deleted the associated view, and now exporting the solution fails. How can I resolve this? Any help would be highly appreciated.
hsk srinivas
Do you have the view in another environment (e.g. PROD)? If so, you can create a new Solution there, add just the entity to the solution, and Export/Import it back to your "broken" environment.
-
Hi all,
I know there is the possibility to export and import views. I can see the "Import Custom Views and Presentations" option on the projects pane of the Process Administrator. The problem is that I can't find a way to export them. I need to export and re-import views from one environment to another, and I wouldn't like to repeat the whole creation process. I'm pretty sure that there is a way, but I feel blind... I just can't see it :/
I hope someone can help me,
thanks
Luca
Dolfino,
For absolutely no reason that makes sense, the export/import functionality of views is not available in Workspace Admin, but in Process Administrator (/webconsole). Go to Organization link, and you'll find the necessary options.
As for what's the best practice? Bundle the views with the project? Or maintain them separately? My personal feeling is keeping as much non-process UI garbage out of the project as possible. Views and presentations are a great example of something that I find totally irrelevant in the context of bundling a business process. While that argument is entirely theoretical and a matter of obsessive organization on my part, there's also practical reasons for not doing that as well:
1. You can't create views that span across processes from different projects if you use Studio (without a lot of trouble). For that, you HAVE to go to Workspace Admin.
2. When deploying projects, the embedded views/presentations overwrite what's currently in the directory, which might include modifications you've made from Workspace Admin.
3. You open up a can of worms when multiple projects use a presentation and name it the same way. Then the latest deploy wins. Yuck!
Thanks
-Wali -
Horizon View Administrator sees all VCenter in linked mode
Hello,
I'm currently evaluating a new client's View 5.2 environment and noticed that 2 VC servers are set up in Administrator. View Composer was installed only on the first VC. I'm able to create pools that use the second VC. However, when I add a desktop manually, it seems that the agent is not able to communicate successfully with the connection server, and the status in Administrator is always "Waiting for agent". I confirmed all ports are not filtered and also examined the agent logs - the only valuable entries I've found are from v4v-agent:
11:29:26 0x00000a1c WARNING Failed to get the message server configuration. Error code 0x80070002.
11:29:26 0x00000a1c INFO Started communication manager.
11:29:28 0x00000a1c INFO Initialized the engine.
11:29:36 0x00000a1c INFO Uninitializing the engine (reason 5)...
11:29:36 0x00000a1c INFO Stopped communication manager.
11:29:36 0x00000a1c INFO Uninitialized the engine.
On the Connection Server logs there is nothing related (or probably I'm looking in the wrong files). I know that Composer is a requirement for linked clones, but what about manual pools? Do I still have to have a Composer on the second VC? I will appreciate any advice. Thanks.
Hello,
I had the same problem, all my appstacks deleted with the linked clone. I created different vcenter users for view and for AppVolumes, and I removed all rights for the view vcenter user on the Appvolumes datastores.
this solved it for me.
Kind regards
Cristiano -
Pga_aggregate_target & db_cache_size for an Oracle 10g Datawarehouse env
We have an Oracle 10g Datawarehousing environment, with roughly 180 GB of data, running on a 3-node RAC
with 16 GB RAM and 4 CPUs each, and we have roughly 200 users and night jobs running on this D/W.
We find that the query performance of all ETL processes and joins is quite slow. Certain packages take 35 min or so to execute.
How much should we modify the values of the pga_aggregate_target and db_cache_size parameters for this Datawarehouse environment? This is a Production database, with Oracle Database 10g Enterprise Edition Release 10.1.0.5.0.
We use the OWB 10g Tool for this D/W.
Current PGA_AGGREGATE_TARGET is 8589934592, whereas db_cache_size is
1073741824
Please suggest ,
Thanks a lot in advance,
It is not clear what is meant by the term "packages" in this context - i.e. are these PL/SQL packages or some other bunch of PL/SQL statements or whatever else - but by simply blindly changing init parameters you won't get anything good in a predictable future.
You have to find WHAT is slow and tune that, it might very well be that it has nothing to do with pga_aggregate_target or another init parameters.
Optimizing Oracle Performance by Cary Millsap and Jeff Holt can be the greatest help in this case.
You can also look at his website, www.hotsos.com; it has many whitepapers that can help you if you don't have the book.
I see "3-node RAC" in your post, for example, and that makes me suspicious, because if you are running batch jobs reading/writing the same data on all 3 nodes simultaneously, then the problem might be data pinging among the nodes. BUT I DON'T KNOW FOR SURE, and neither do you, so the only predictable scenario is to find out what the reason actually is.
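Before touching init parameters, the system-level wait interface gives a first cut at what the instance is actually waiting on. A sketch for 10g (an AWR or Statspack report over the slow window tells the same story with more context):

```sql
-- Top 10 non-idle wait events since instance startup,
-- highest accumulated wait time first.
SELECT *
FROM  (SELECT event, total_waits, time_waited
       FROM   v$system_event
       WHERE  wait_class <> 'Idle'
       ORDER  BY time_waited DESC)
WHERE  rownum <= 10;
```

If RAC interconnect traffic is the problem, 'gc' (global cache) events will dominate this list.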
Gints Plivna
http://www.gplivna.eu -
Architecture of streams for datawarehouse extract-transform-load operations
We have several 9i Release 2 and 10g Release 2 source databases. Destination datawarehouse database is 10g Release 2.
We want to capture the changes on some operational tables and apply them to our datawarehouse environment, but here I have two questions;
1- Do 9iR2 and 10gR2 source databases need different Streams configurations?
2- How can I implement a "capture queue at source -> apply queue at target -> PL/SQL transformation process at target" architecture? Are any example references available?
I found these two demos;
http://www.psoug.org/reference/streams_demo1.html
http://www.psoug.org/reference/streams_demo2.html
but what we need is a mixture of these two examples.
Also saw this oramag article;
http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
And this article;
http://www.dbasupport.com/oracle/ora10g/downstream.shtml
And these presentations;
http://julian.dyke.users.btopenworld.com/com/Presentations/Presentations.html#Streams
Experiences with Real-Time Data Warehousing Using Oracle Database 10G by Mike Schmitz
But they do not cover a 9iR2-with-10gR2 source environment or custom transformation PL/SQL steps. Since there are no similar tables at the target database, we need a transformation like: "an insert on table A at the source system becomes an update on table X at the target system".
Any comments or references would be great,
Best regards.
Thanks to Mr. Rittman; http://www.rittmanmead.com/2006/04/14/asynchronous-hotlog-distributed-change-data-capture-and-owb-paris/
for mentioning the best guide I have seen up to now on Asynchronous Change Data Capture, Mr. Mark Van de Weil's "Asynchronous Change Data Capture Cookbook";
http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_cdc_cookbook_0206.pdf
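For the insert-at-source-becomes-update-at-target requirement, Streams supports custom apply-side DML handlers. A sketch with hypothetical schema and object names (SCOTT.SRC_TAB, target table X); consult the Streams Apply Process documentation for the exact package signatures:

```sql
-- Handler: receives each insert LCR for SRC_TAB and re-targets it
-- as an UPDATE of table X. Names and column mapping are hypothetical.
CREATE OR REPLACE PROCEDURE src_insert_to_x_update (in_any IN ANYDATA)
IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);
  lcr.SET_OBJECT_NAME('X');          -- redirect to the target table
  lcr.SET_COMMAND_TYPE('UPDATE');    -- change the operation
  -- ...populate the OLD values needed to identify the row to update...
  lcr.EXECUTE(TRUE);
END;
/

-- Register the handler for inserts on the source table:
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'SCOTT.SRC_TAB',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => FALSE,
    user_procedure => 'SCOTT.SRC_INSERT_TO_X_UPDATE');
END;
/
```

As for the 9iR2 vs 10gR2 question: the source versions can share the overall architecture, but 10g-only features such as downstream capture are not available for the 9iR2 sources, which would need local capture.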