Datawarehouse architecture

Hi,
I would like to know about data warehouse architecture: what the roles are, and which documents an architect is supposed to prepare.
regards
naren

Good morning Naren,
Open your internet browser, go to amazon.com and search for "data warehouse"; it will come up with a number of useful books.
Basically, any of the top 15 books will provide you with the information you need, and probably even more...
Good luck, Patrick

Similar Messages

  • [newbie] Configuration of Oracle Datawarehouse Builder

    I've been trying to install Oracle Datawarehouse Builder 11g/10g on a WinXP 32-bit machine; however, when browsing to the Repository Explorer, a "page cannot be displayed" error appears. Basically, all I've done so far is step through the installation wizard. Basic troubleshooting (see link below) did not improve matters much, and neither did a Google search.
    Has anyone encountered similar issues and found a solution?
    Setting Up the Oracle Warehouse Builder Project
    http://www.oracle.com/technology/obe/11gr1_owb/owb11g_update_extend_knowledge/less1_setting_up/less1_setting_up.html

    No, it's not a problem to get to the Design Center. The browser is just a helpful tool for reviewing your previous executions, deployments, or schedules in an easy way.
    There is a great tutorial that got me started:
    http://www.oracle.com/technology/obe/11gr1_owb/index.htm
    The thing is, that tutorial teaches you to use the tools, but not really why you need them in your data warehouse architecture. The best resource for that is a book by Ralph Kimball: The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data.
    But try the tutorial first and get used to the ETL tools.
    Cheers

  • Architecture of streams for datawarehouse extract-transform-load operations

    We have several 9i Release 2 and 10g Release 2 source databases. The destination data warehouse database is 10g Release 2.
    We want to capture the changes on some operational tables and apply them to our data warehouse environment, but here I have two questions:
    1- Do the 9iR2 and 10gR2 source databases need different Streams configurations?
    2- How can I implement a "capture queue at source -> apply queue at target -> PL/SQL transformation process at target" architecture? Are any example references available?
    I found these two demos;
    http://www.psoug.org/reference/streams_demo1.html
    http://www.psoug.org/reference/streams_demo2.html
    but what we need is a mixture of these two examples.
    Also saw this oramag article;
    http://www.oracle.com/technology/oramag/oracle/04-nov/o64streams.html
    And this article;
    http://www.dbasupport.com/oracle/ora10g/downstream.shtml
    And these presentations;
    http://julian.dyke.users.btopenworld.com/com/Presentations/Presentations.html#Streams
    Experiences with Real-Time Data Warehousing Using Oracle Database 10G by Mike Schmitz
    But they do not cover a mixed 9iR2 and 10gR2 source environment with custom PL/SQL transformation steps. Since there are no similar tables in the target database, we need a transformation like: "an insert on table A in the source system becomes an update on table X in the target system".
    Any comments or references would be great,
    Best regards.

    Thanks to Mr. Rittman (http://www.rittmanmead.com/2006/04/14/asynchronous-hotlog-distributed-change-data-capture-and-owb-paris/)
    for mentioning the best guide I have seen so far on Asynchronous Change Data Capture, Mr. Mark Van de Wiel's "Asynchronous Change Data Capture Cookbook":
    http://www.oracle.com/technology/products/bi/db/10g/pdf/twp_cdc_cookbook_0206.pdf
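
    For the custom transformation step, one common building block is a PL/SQL DML handler registered on the apply side, so that an insert LCR captured at the source is turned into an update against a different table at the target. A minimal sketch follows; the schema, table, column, and procedure names are hypothetical and the column mapping is invented for illustration:
    -- Hypothetical handler: an INSERT on src_schema.a becomes an UPDATE on dw_schema.x
    CREATE OR REPLACE PROCEDURE dw_admin.handle_a_insert (in_any IN ANYDATA)
    IS
      lcr   SYS.LCR$_ROW_RECORD;
      rc    PLS_INTEGER;
      val   SYS.ANYDATA;
      v_id  NUMBER;
      v_amt NUMBER;
    BEGIN
      rc := in_any.GETOBJECT(lcr);
      -- Read the columns of interest from the new values of the row LCR
      val := lcr.GET_VALUE('new', 'ID');
      rc  := val.GETNUMBER(v_id);
      val := lcr.GET_VALUE('new', 'AMOUNT');
      rc  := val.GETNUMBER(v_amt);
      -- The transformation itself: update X instead of applying the insert
      UPDATE dw_schema.x
         SET total_amount = total_amount + v_amt
       WHERE id = v_id;
    END;
    /
    -- Tell the apply process to route INSERTs on the source table to the handler
    BEGIN
      DBMS_APPLY_ADM.SET_DML_HANDLER(
        object_name    => 'src_schema.a',
        object_type    => 'TABLE',
        operation_name => 'INSERT',
        error_handler  => FALSE,
        user_procedure => 'dw_admin.handle_a_insert');
    END;
    /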

  • ASM use for small scale datawarehouse ?

    Hello guys,
    We are working on a data warehouse (around 50 GB) architecture with the following acquired environment:
    Single server X3650 M4, dual CPU (16 cores in total) with 48 GB RAM
    Oracle Standard 10g x64
    Windows 2008 x64
    128 GB SSD x 8
    IBM ServeRAID M5110e SAS/SATA Controller
    Due to budget concerns, we will be running the app server (Business Objects 4.0 with Tomcat) and the DB server on the same machine.
    We have a user base of around 30 people on the app server.
    We intend to have external redundancy using the IBM RAID card in a RAID 10 configuration. I wonder what kind of disk configuration yields better performance if we only have write updates in the morning and 95% reads for the rest of the day:
    RAID 1 for OS (128 GB SSD x 2, including DB logfile)
    RAID 10 for DB server (128 GB SSD x 6)
    I have heard that ASM provides better disk management, but I wonder whether it helps performance in any way.
    Any advice on a better disk configuration, segment size, or block size?
    thanks
    Clement

    Hi,
    ASM does provide redundancy, but you can also configure external redundancy at the SAN/RAID level; please go through the links below for more information.
    ASM Redundancy (Normal or High) vs External Redundancy
    Re: ASM Redundancy (Normal or High) vs External Redundancy
    Database Storage Administrator's Guide
    http://docs.oracle.com/cd/B28359_01/server.111/b31107/asmcon.htm
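
    If the IBM RAID card is doing the mirroring, the matching ASM setup is an external-redundancy disk group, where ASM only handles striping and storage management. A minimal sketch, assuming a Windows raw device already stamped for ASM (the disk group and disk names are hypothetical):
    -- ASM does no mirroring here; redundancy comes from the RAID 10 array
    CREATE DISKGROUP dw_data EXTERNAL REDUNDANCY
      DISK '\\.\ORCLDISKDATA0';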

  • SQL Server DW architecture question

    Hi guys,
    I was planning the architecture of a SQL Server Datawarehouse project (SQL Server 2014 and Sharepoint 2013) and I was thinking in 3 servers:
    1 - SQL Server Database Engine and Integration Services
    2 - SQL Server Analysis Services, using tabular models to support Powerpivot and Powerview
    3 - SQL Server Reporting Services and  Sharepoint
    I want to know from your experience whether there is any issue with this approach, and whether it's better to install the Reporting Services and SharePoint databases:
    a) on server 1, either in a specific instance for ReportServer and a specific instance for SharePoint, or in the same instance as the DW databases. My issue with this is that SharePoint installs so many databases with hash-like keys in their names, which is visually annoying; or
    b) on server 3, on the same server as SSRS and SharePoint.
    Other approaches are welcome.
    Thank you

    We're currently in the process of implementing a data warehouse project and our configuration is similar to yours. We're using a standalone server for SSAS (Multidimensional); if yours is Tabular, you might need more memory on that server. We're using a shared reporting server (which hosts other reporting applications as well, like Cognos).
    The database/ETL server is the same machine. For ETL we're using SSIS 2012 packages with the SSIS catalog (project deployment). We also have an Oracle CDC instance running on it to pull deltas from an Oracle DB, which is one of our sources.
    As far as current performance goes, it's not having any issues, but we don't have much data coming in yet.
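    If you go with the SSIS catalog (project deployment) as described above, execution history on the ETL server can be checked with a simple query against the standard SSISDB catalog views. A minimal sketch (the TOP count is arbitrary):
    -- Most recent package executions and their outcome, from the SSIS catalog
    SELECT TOP (20)
           e.execution_id,
           e.folder_name,
           e.project_name,
           e.package_name,
           e.status,      -- 1=created, 2=running, 3=canceled, 4=failed, 7=succeeded
           e.start_time,
           e.end_time
    FROM   SSISDB.catalog.executions AS e
    ORDER  BY e.start_time DESC;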
    Visakh

  • Datawarehouse with dependent datamarts

    Hello Everyone,
    Could you please guide me on how to:
    1) Create a data warehouse with a focus on building dependent datamarts on top of it
    2) Find the simplest procedure for loading the data warehouse into these datamarts
    3) Can I have multiple fact tables in the data warehouse, with each fact table and its set of dimensions catering to one datamart?
    4) Additionally, can we load data directly into these datamarts without loading it into the data warehouse first?
    Any reference material dealing with this scenario would be highly appreciated.
    Thanks Much,
    Sri

    Hi,
    Not using a central data warehouse causes several BIG problems. You will build cubes for a specific department and will not be able to integrate them, including their ETL processes, for other departments. Larissa Moss calls these 'swim lanes'! I would never do this; it's a low level of maturity for a dwh, the way things were done some years ago. Golden rule: think big, start small. You can build cubes without building the complete enterprise model for your whole company. Just model the parts you need for your project - but keep in mind that they must fit other departments too.
    And try to put all department-specific logic in the load processes from the dwh to the data marts. Then you can reuse all the data and load processes of your dwh to build data marts for other departments. If you load data marts straight out of the source, the data marts for other departments (which may differ only slightly from the existing ones) must be built from scratch. You will have a lot of redundant code, a really bad architecture, too many process flows, ...
    Regards,
    Detlef
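
    Detlef's rule of thumb, keeping department-specific logic in the dwh-to-mart load, can be made concrete with a sketch like the following; the schema, table, and column names are hypothetical:
    -- Cleansing and conforming happened once, on the way INTO the warehouse.
    -- The mart load only applies department-specific filtering and aggregation,
    -- so another department can reuse the same warehouse tables and processes.
    INSERT INTO sales_mart.monthly_revenue (month_key, product_key, revenue)
    SELECT d.month_key,
           f.product_key,
           SUM(f.sale_amount)
    FROM   dwh.sales_fact f
    JOIN   dwh.date_dim   d ON d.date_key = f.date_key
    WHERE  f.region_key IN (SELECT region_key
                            FROM   dwh.region_dim
                            WHERE  sales_org = 'EMEA')  -- department-specific rule
    GROUP  BY d.month_key, f.product_key;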

  • Converting an OLTP oracle 9i DB to a Datawarehouse DB

    Hi Gurus,
    I have a 9.2.0.6 DB running as an OLTP DB. Now the customer wants us to configure this same DB as a data warehouse on the same server.
    I need some quick help from you on this, please!
    I appreciate any quick responses... thanks in advance.

    Again, it's not too clear what you, or your client, are asking. Is this server:
    1. Being re-commissioned as a DWH server?
    2. Being asked to host another Oracle instance?
    3. Being asked to use a single instance for both OLTP and DWH responsibilities?
    Which is it, please?
    In regards to steps for transformation and hardware, those will actually come from the business requirements. You need to gather the requirements for the warehouse from the business owners and developers of the warehouse.
    How much data will the warehouse hold and for how long?
    What is the expected growth rate?
    What will be the archive and purge policy?
    What is the expected IOPS?
    How much data do you expect to process per month, per day, or per hour?
    Is there a requirement for a near-real time warehouse, or is processing only hourly, daily, monthly?
    So many questions... these will determine your hardware requirements, physical and logical design.
    Oracle, along with vendors such as Sun, Dell, HP, and IBM, not only offers warehouse reference architectures, but now also various 'warehouse appliances' (aka the Oracle Optimized Warehouse Initiative), which are complete systems for warehouse databases based on size and usage. See this link (http://www.oracle.com/solutions/business_intelligence/optimized-warehouse-initiative.html) for a look at the designs, as a baseline to get you started.
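    As a starting point for the sizing questions above, you can measure what the existing OLTP database already holds with the standard DBA views. A minimal sketch:
    -- Current space usage per schema, as a baseline for warehouse sizing
    SELECT owner,
           ROUND(SUM(bytes) / 1024 / 1024 / 1024, 2) AS size_gb
    FROM   dba_segments
    GROUP  BY owner
    ORDER  BY size_gb DESC;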

  • Datamart - Global datawarehouse

    Dear Chetan (CP),
    What is GENERATE EXPORT DATASOURCE?
    I went through your link and also tried calling you / left a voicemail (I guess you are travelling). I urgently need this concept and architecture at my client site. Can you please send some snapshots with explanations of how to model a global data warehouse? I have the SAP architecture downloaded from the Marketplace and SDN, but I need the specifics of this scenario with the NW2004s architecture and your beautiful narrations.
    In case you deleted my email address, here it is again:
    [email protected]
    thanks in advance
    moini

    Dear Chetan ,
    Thanks for your utmost help.
    Me...

  • I recently ran Monolingual and removed all but the Intel 64-bit architectures. Now my iPhoto will not open.

    I recently ran Monolingual and removed all but the Intel 64-bit architectures. Now my iPhoto (along with iDVD, GarageBand, iMovie) will not open. Here is the message that I get.
    Process:         iPhoto [3543]
    Path:            /Applications/iPhoto.app/Contents/MacOS/iPhoto
    Identifier:      com.apple.iPhoto
    Version:         ??? (???)
    Build Info:      iPhotoProject-4750000~1
    Code Type:       X86 (Native)
    Parent Process:  launchd [109]
    Date/Time:       2011-06-10 21:48:59.821 -0500
    OS Version:      Mac OS X 10.6.7 (10J869)
    Report Version:  6
    Interval Since Last Report:          -4164908 sec
    Crashes Since Last Report:           8
    Per-App Crashes Since Last Report:   11
    Anonymous UUID:                      45357CCD-011B-482E-A2EA-CF42096F1321
    Exception Type:  EXC_BREAKPOINT (SIGTRAP)
    Exception Codes: 0x0000000000000002, 0x0000000000000000
    Crashed Thread:  0
    Dyld Error Message:
      Library not loaded: /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow
      Referenced from: /Applications/iPhoto.app/Contents/MacOS/iPhoto
      Reason: no suitable image found.  Did find:
              /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
              /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
    Binary Images:
    0x8fe00000 - 0x8fe4162b  dyld 132.1 (???) <1C06ECD9-A2D7-BB10-AF50-0F2B598A7DEC> /usr/lib/dyld
    Model: iMac10,1, BootROM IM101.00CC.B00, 2 processors, Intel Core 2 Duo, 3.06 GHz, 4 GB, SMC 1.53f13
    Graphics: ATI Radeon HD 4670, ATI Radeon HD 4670, PCIe, 256 MB
    Memory Module: global_name
    AirPort: spairport_wireless_card_type_airport_extreme (0x168C, 0x8F), Atheros 9280: 2.1.14.5
    Bluetooth: Version 2.4.0f1, 2 service, 19 devices, 1 incoming serial ports
    Network Service: Built-in Ethernet, Ethernet, en0
    Serial ATA Device: ST31000528ASQ, 931.51 GB
    Serial ATA Device: OPTIARC DVD RW AD-5680H
    USB Device: USB2.0 Hub, 0x05e3  (Genesys Logic, Inc.), 0x0608, 0x24300000
    USB Device: Built-in iSight, 0x05ac  (Apple Inc.), 0x8502, 0x24400000
    USB Device: External HDD, 0x1058  (Western Digital Technologies, Inc.), 0x0901, 0x26400000
    USB Device: Internal Memory Card Reader, 0x05ac  (Apple Inc.), 0x8403, 0x26500000
    USB Device: IR Receiver, 0x05ac  (Apple Inc.), 0x8242, 0x04500000
    USB Device: BRCM2046 Hub, 0x0a5c  (Broadcom Corp.), 0x4500, 0x06100000
    USB Device: Bluetooth USB Host Controller, 0x05ac  (Apple Inc.), 0x8215, 0x06110000

    Please let me know when you find a fix. I did the same thing and have tried every suggestion I can find online. The message I get is...
    Process:         iPhoto [4991]
    Path:            /Applications/iPhoto.app/Contents/MacOS/iPhoto
    Identifier:      com.apple.iPhoto
    Version:         ??? (???)
    Build Info:      iPhotoProject-6070000~1
    Code Type:       X86 (Native)
    Parent Process:  launchd [142]
    Date/Time:       2011-06-13 23:39:38.485 +1200
    OS Version:      Mac OS X 10.6.7 (10J869)
    Report Version:  6
    Interval Since Last Report:          -1643976 sec
    Crashes Since Last Report:           35
    Per-App Crashes Since Last Report:   12
    Anonymous UUID:                      D4811036-EA8D-479D-8D9F-11E2FC8F6D4C
    Exception Type:  EXC_BREAKPOINT (SIGTRAP)
    Exception Codes: 0x0000000000000002, 0x0000000000000000
    Crashed Thread:  0
    Dyld Error Message:
      Library not loaded: /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow
      Referenced from: /Applications/iPhoto.app/Contents/MacOS/iPhoto
      Reason: no suitable image found.  Did find:
              /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
              /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow: mach-o, but wrong architecture
    Binary Images:
    0x8fe00000 - 0x8fe4162b  dyld 132.1 (???) <1C06ECD9-A2D7-BB10-AF50-0F2B598A7DEC> /usr/lib/dyld
    Model: MacBookPro7,1, BootROM MBP71.0039.B0B, 2 processors, Intel Core 2 Duo, 2.4 GHz, 4 GB, SMC 1.62f6
    Graphics: NVIDIA GeForce 320M, NVIDIA GeForce 320M, PCI, 256 MB
    Memory Module: global_name
    AirPort: spairport_wireless_card_type_airport_extreme (0x14E4, 0x8D), Broadcom BCM43xx 1.0 (5.10.131.36.9)
    Bluetooth: Version 2.4.0f1, 2 service, 19 devices, 1 incoming serial ports
    Network Service: AirPort, AirPort, en1
    Serial ATA Device: Hitachi HTS545025B9SA02, 232.89 GB
    Serial ATA Device: MATSHITADVD-R   UJ-898, 3.5 GB
    USB Device: Internal Memory Card Reader, 0x05ac  (Apple Inc.), 0x8403, 0x26100000
    USB Device: Built-in iSight, 0x05ac  (Apple Inc.), 0x8507, 0x24600000
    USB Device: BRCM2046 Hub, 0x0a5c  (Broadcom Corp.), 0x4500, 0x06600000
    USB Device: Bluetooth USB Host Controller, 0x05ac  (Apple Inc.), 0x8213, 0x06610000
    USB Device: IR Receiver, 0x05ac  (Apple Inc.), 0x8242, 0x06500000
    USB Device: Apple Internal Keyboard / Trackpad, 0x05ac  (Apple Inc.), 0x0236, 0x06300000
    I have reinstalled Mac OSX 10.6.3 and done the updates from there.
    I have reinstalled ilife 11 from disk and done the updates.
    I have deleted all the suggested files and then redone install and updates.
    I have tried just reinstalling iphoto and doing updates.
    Is there any way to get a replacement -  /Library/Frameworks/iLifeSlideshow.framework/Versions/A/iLifeSlideshow
    file with the right architecture?

  • Help needed for hash_area_size setting for Datawarehouse environment

    We have an Oracle 10g data warehousing environment running on a 3-node RAC,
    with 16 GB RAM and 4 CPUs on each node, and roughly 200 users plus night jobs running on this DW.
    We find that the query performance of all ETL processes and joins is quite slow.
    By how much should we increase the hash_area_size parameter for this data warehouse environment? This is a production database, Oracle Database 10g Enterprise Edition Release 10.1.0.5.0.
    We use the OWB 10g tool for this DW, and we need to change hash_area_size to improve the performance of the ETL processes.
    These are the Oracle init parameter settings used, as shown below:
    Kindly suggest,
    Thanks & best regards,
    ===========================================================
         ORBIT
    __db_cache_size     1073741824
    __java_pool_size     67108864
    __large_pool_size     318767104
    __shared_pool_size     1744830464
    _optimizer_cost_based_transformation     OFF
    active_instance_count     
    aq_tm_processes     1
    archive_lag_target     0
    asm_diskgroups     
    asm_diskstring     
    asm_power_limit     1
    audit_file_dest     /dboracle/orabase/product/10.1.0/rdbms/audit
    audit_sys_operations     FALSE
    audit_trail     NONE
    background_core_dump     partial
    background_dump_dest     /dborafiles/orbit/ORBIT01/admin/bdump
    backup_tape_io_slaves     TRUE
    bitmap_merge_area_size     1048576
    blank_trimming     FALSE
    buffer_pool_keep     
    buffer_pool_recycle     
    circuits     
    cluster_database     TRUE
    cluster_database_instances     3
    cluster_interconnects     
    commit_point_strength     1
    compatible     10.1.0
    control_file_record_keep_time     90
    control_files     #NAME?
    core_dump_dest     /dborafiles/orbit/ORBIT01/admin/cdump
    cpu_count     4
    create_bitmap_area_size     8388608
    create_stored_outlines     
    cursor_sharing     EXACT
    cursor_space_for_time     FALSE
    db_16k_cache_size     0
    db_2k_cache_size     0
    db_32k_cache_size     0
    db_4k_cache_size     0
    db_8k_cache_size     0
    db_block_buffers     0
    db_block_checking     FALSE
    db_block_checksum     TRUE
    db_block_size     8192
    db_cache_advice     ON
    db_cache_size     1073741824
    db_create_file_dest     #NAME?
    db_create_online_log_dest_1     #NAME?
    db_create_online_log_dest_2     #NAME?
    db_create_online_log_dest_3     
    db_create_online_log_dest_4     
    db_create_online_log_dest_5     
    db_domain     
    db_file_multiblock_read_count     64
    db_file_name_convert     
    db_files     999
    db_flashback_retention_target     1440
    db_keep_cache_size     0
    db_name     ORBIT
    db_recovery_file_dest     #NAME?
    db_recovery_file_dest_size     262144000000
    db_recycle_cache_size     0
    db_unique_name     ORBIT
    db_writer_processes     1
    dbwr_io_slaves     0
    ddl_wait_for_locks     FALSE
    dg_broker_config_file1     /dboracle/orabase/product/10.1.0/dbs/dr1ORBIT.dat
    dg_broker_config_file2     /dboracle/orabase/product/10.1.0/dbs/dr2ORBIT.dat
    dg_broker_start     FALSE
    disk_asynch_io     TRUE
    dispatchers     
    distributed_lock_timeout     60
    dml_locks     9700
    drs_start     FALSE
    enqueue_resources     10719
    event     
    fal_client     
    fal_server     
    fast_start_io_target     0
    fast_start_mttr_target     0
    fast_start_parallel_rollback     LOW
    file_mapping     FALSE
    fileio_network_adapters     
    filesystemio_options     asynch
    fixed_date     
    gc_files_to_locks     
    gcs_server_processes     2
    global_context_pool_size     
    global_names     FALSE
    hash_area_size     131072
    hi_shared_memory_address     0
    hpux_sched_noage     0
    hs_autoregister     TRUE
    ifile     
    instance_groups     
    instance_name     ORBIT01
    instance_number     1
    instance_type     RDBMS
    java_max_sessionspace_size     0
    java_pool_size     67108864
    java_soft_sessionspace_limit     0
    job_queue_processes     10
    large_pool_size     318767104
    ldap_directory_access     NONE
    license_max_sessions     0
    license_max_users     0
    license_sessions_warning     0
    local_listener     
    lock_name_space     
    lock_sga     FALSE
    log_archive_config     
    log_archive_dest     
    log_archive_dest_1     LOCATION=+ORBT_A06635_DATA1_ASM/ORBIT/ARCHIVELOG/
    log_archive_dest_10     
    log_archive_dest_2     
    log_archive_dest_3     
    log_archive_dest_4     
    log_archive_dest_5     
    log_archive_dest_6     
    log_archive_dest_7     
    log_archive_dest_8     
    log_archive_dest_9     
    log_archive_dest_state_1     enable
    log_archive_dest_state_10     enable
    log_archive_dest_state_2     enable
    log_archive_dest_state_3     enable
    log_archive_dest_state_4     enable
    log_archive_dest_state_5     enable
    log_archive_dest_state_6     enable
    log_archive_dest_state_7     enable
    log_archive_dest_state_8     enable
    log_archive_dest_state_9     enable
    log_archive_duplex_dest     
    log_archive_format     %t_%s_%r.arc
    log_archive_local_first     TRUE
    log_archive_max_processes     2
    log_archive_min_succeed_dest     1
    log_archive_start     FALSE
    log_archive_trace     0
    log_buffer     1167360
    log_checkpoint_interval     0
    log_checkpoint_timeout     1800
    log_checkpoints_to_alert     FALSE
    log_file_name_convert     
    logmnr_max_persistent_sessions     1
    max_commit_propagation_delay     700
    max_dispatchers     
    max_dump_file_size     UNLIMITED
    max_enabled_roles     150
    max_shared_servers     
    nls_calendar     
    nls_comp     
    nls_currency     #
    nls_date_format     DD-MON-RRRR
    nls_date_language     ENGLISH
    nls_dual_currency     ?
    nls_iso_currency     UNITED KINGDOM
    nls_language     ENGLISH
    nls_length_semantics     BYTE
    nls_nchar_conv_excp     FALSE
    nls_numeric_characters     
    nls_sort     
    nls_territory     UNITED KINGDOM
    nls_time_format     HH24.MI.SSXFF
    nls_time_tz_format     HH24.MI.SSXFF TZR
    nls_timestamp_format     DD-MON-RR HH24.MI.SSXFF
    nls_timestamp_tz_format     DD-MON-RR HH24.MI.SSXFF TZR
    O7_DICTIONARY_ACCESSIBILITY     FALSE
    object_cache_max_size_percent     10
    object_cache_optimal_size     102400
    olap_page_pool_size     0
    open_cursors     1024
    open_links     4
    open_links_per_instance     4
    optimizer_dynamic_sampling     2
    optimizer_features_enable     10.1.0.5
    optimizer_index_caching     0
    optimizer_index_cost_adj     100
    optimizer_mode     ALL_ROWS
    os_authent_prefix     ops$
    os_roles     FALSE
    parallel_adaptive_multi_user     TRUE
    parallel_automatic_tuning     TRUE
    parallel_execution_message_size     4096
    parallel_instance_group     
    parallel_max_servers     80
    parallel_min_percent     0
    parallel_min_servers     0
    parallel_server     TRUE
    parallel_server_instances     3
    parallel_threads_per_cpu     2
    pga_aggregate_target     8589934592
    plsql_code_type     INTERPRETED
    plsql_compiler_flags     INTERPRETED
    plsql_debug     FALSE
    plsql_native_library_dir     
    plsql_native_library_subdir_count     0
    plsql_optimize_level     2
    plsql_v2_compatibility     FALSE
    plsql_warnings     DISABLE:ALL
    pre_page_sga     FALSE
    processes     600
    query_rewrite_enabled     TRUE
    query_rewrite_integrity     enforced
    rdbms_server_dn     
    read_only_open_delayed     FALSE
    recovery_parallelism     0
    remote_archive_enable     TRUE
    remote_dependencies_mode     TIMESTAMP
    remote_listener     
    remote_login_passwordfile     EXCLUSIVE
    remote_os_authent     FALSE
    remote_os_roles     FALSE
    replication_dependency_tracking     TRUE
    resource_limit     FALSE
    resource_manager_plan     
    resumable_timeout     0
    rollback_segments     
    serial_reuse     disable
    service_names     ORBIT
    session_cached_cursors     0
    session_max_open_files     10
    sessions     2205
    sga_max_size     3221225472
    sga_target     3221225472
    shadow_core_dump     partial
    shared_memory_address     0
    shared_pool_reserved_size     102760448
    shared_pool_size     318767104
    shared_server_sessions     
    shared_servers     0
    skip_unusable_indexes     TRUE
    smtp_out_server     
    sort_area_retained_size     0
    sort_area_size     65536
    sp_name     ORBIT
    spfile     #NAME?
    sql_trace     FALSE
    sql_version     NATIVE
    sql92_security     FALSE
    sqltune_category     DEFAULT
    standby_archive_dest     ?/dbs/arch
    standby_file_management     MANUAL
    star_transformation_enabled     TRUE
    statistics_level     TYPICAL
    streams_pool_size     0
    tape_asynch_io     TRUE
    thread     1
    timed_os_statistics     0
    timed_statistics     TRUE
    trace_enabled     TRUE
    tracefile_identifier     
    transactions     2425
    transactions_per_rollback_segment     5
    undo_management     AUTO
    undo_retention     7200
    undo_tablespace     UNDOTBS1
    use_indirect_data_buffers     FALSE
    user_dump_dest     /dborafiles/orbit/ORBIT01/admin/udump
    utl_file_dir     /orbit_serial/oracle/utl_out
    workarea_size_policy     AUTO

    The parameters are already unset in the environment, but do show up in v$parameter, much like shared_pool_size is visible in v$parameter despite only sga_target being set.
    SQL> show parameter sort
    NAME TYPE VALUE
    _sort_elimination_cost_ratio integer 5
    nls_sort string binary
    sort_area_retained_size integer 0
    sort_area_size integer 65536
    SQL> show parameter hash
    NAME TYPE VALUE
    hash_area_size integer 131072
    SQL> exit
    Note that hash_area_size and sort_area_size take effect only when automatic PGA memory management is not in use; since your init.ora has workarea_size_policy = AUTO with pga_aggregate_target set, they are ignored for dedicated server sessions. See:
    Database Initialization Parameters for Oracle Applications 11i
    http://metalink.oracle.com/metalink/plsql/ml2_documents.showDocument?p_database_id=NOT&p_id=216205.1
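
    With workarea_size_policy = AUTO, the knob that matters is pga_aggregate_target rather than hash_area_size. A quick sanity check against the standard dynamic performance views (a minimal sketch):
    -- Current automatic PGA statistics
    SELECT name, value
    FROM   v$pgastat
    WHERE  name IN ('aggregate PGA target parameter',
                    'total PGA allocated',
                    'cache hit percentage');
    -- The advisory estimates the effect of resizing pga_aggregate_target
    SELECT pga_target_for_estimate / 1024 / 1024 AS target_mb,
           estd_pga_cache_hit_percentage,
           estd_overalloc_count
    FROM   v$pga_target_advice
    ORDER  BY pga_target_for_estimate;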

  • What architecture is best for accurate data logging

    Hello,
    I'm designing some LabVIEW code for a standard DAQ application that is required to plot about 100 variables onto the screen on several different graphs and numeric indicators, as well as perform some simple feedback control and log data to a file once a second.
    I've done this before, using a simple state machine architecture, where one state takes care of my logging, and I have a timer VI in there that counts down 1 second and then writes to file. However, this method makes me miss a second every once in a while.
    I started looking into the producer/consumer architecture as a possible remedy, because I hear it's good for running two things at different rates: I'll have my quicker loop handling data acquisition, plots, and feedback control, and my slower logging loop executing once a second. But I don't see how to implement this.
    Questions:
    1. Is a simple producer/consumer the right topology for my application?
    2. When I create my queue, do I create it as a 100-element array (my data for logging) and then enqueue that in my producer loop from my data acquisition, then pass it to the logging VI? This seems wrong to me, because I'm going to be enqueuing a lot of 100-element arrays... and will be dequeuing them slowly, once a second.
    3. How do I trigger my consumer loop to execute every second? Should I set it up as a timed while loop, or should something from the producer loop tell it to?
    I'm sure this is a pretty standard thing to do, I'm just not sure how to implement the correct architecture.
    much thanks! 

    Ok, let's try this. I've put together an example that should do what you need. I put notes in the block diagram, but essentially it runs data in a while loop at whatever execution rate you specify, then sends the data to another graph (or in your case, a log) every second. Basically, I've used a 100 ms execution rate for the while loop; then every 10th iteration (you can change this if you want), it sends a boolean 'true' to a case structure within the while loop that contains the Enqueue Element. The graphs that I included show that it does indeed add a new point to the second graph once a second, while the first one is adding a point every 100 ms.
    The actual wiring of this VI could be cleaner for sure, but it was a quick and dirty example I put together. Hopefully this will help you accomplish what you're trying to do.
    Regards,
    Austin S.
    National Instruments
    Academic Field Engineer
    Attachments:
    Enqueue array 2.vi ‏28 KB

  • Single Sign on in a 3 tier architecture between SAP Netweaver CE and R/3

    Hi All,
    I am trying to implement SSO using SAP logon tickets in a 3 tier architecture between NW CE and R/3. But so far I have not been able to crack this.
    Let me describe the scenario in detail:
    We have two Java EE applications on Netweaver CE7.2 Application Server:
    1. UI: Just handles all the UI logic : js, jsp, css, html, extjs .It calls the Business Layer Java EE application to get data from back-end systems.
    2. Business Layer: Calls R/3 SOAP services does some processing on them and exposes the data back to the UI via a Restful JSON service (implemented using Java Spring framework)
    Both UI and Business Layer Java EE applications define login modules to be used for SAP logon tickets. So the architecture is like this:
    UI --REST--> Business Layer --SOAP--> ABAP R/3
    So ideally, when the UI link is clicked, it prompts the user for authentication (using the CE UME), and then the UI application calls the Business Layer, which then calls R/3. This unfortunately doesn't work: the authentication between the UI and the Business Layer application fails.
    However, if you remove the Business Layer Java EE application and call the SOAP service directly from the UI, SAP logon tickets start working.
    So I have been able to make SAP logon tickets work with the following 2-tier architecture:
    UI --SOAP--> R/3
    So my Question is:
    Is there a way to use SAP logon tickets in a 3 tier architecture between NW CE and R/3 (For the scenario described above)? Any help/pointers/documentation links would be great

    Hey Martin,
    To enable SSO I updated web.xml and engine-j2ee.xml for both UI and Business Layer application according to the login module stacks defined (the first one) in the following link:
    http://help.sap.com/saphelp_NW70EHP1/helpdata/en/04/120b40c6c01961e10000000a155106/content.htm
    Initially both the UI and the Business Layer had the same entries in web.xml and engine.xml. But since this was not working, I did all kinds of testing. For the UI I used FORM-based authentication, and for the Business Layer I was using BASIC authentication.
    I tested the following scenarios:
    1. Without any changes to the above XML files: the Business Layer rejects any requests from the UI. I checked the browser and the "MYSAPSSO2" cookie was created. Somehow the UI doesn't use it to call the Business Layer, or the Business Layer rejects the token itself.
    2. I removed authentication from the Business Layer application (web.xml), keeping the UI the same: the call went to R/3 but returned an "Unauthorized" error. In this case too, the "MYSAPSSO2" token was created at the browser level but was not used by the Business Layer to call R/3.
    3. I then did all sorts of permutations and combinations with the sample login modules provided (see link above) on both the UI and the Business Layer application. Nothing worked; all combinations led to the same two results as 1 and 2.
    It seems all this is happening because there is another application between the UI and R/3.
    Hope this Clarifies.
    Thanks,
    Dhannajay

  • Need help to implement datamart or datawarehouse

    Hi all,
    We want to improve our reporting activities. We have 3 production relational Oracle databases, and we want to build 1 reporting database with historized and aggregated data responding to our reporting needs.
    The databases we are using are Oracle Database 10g.
    Currently we still query the production databases directly for reporting purposes, but from internet research I know that we can implement a datamart or data warehouse to group all the aggregated information for reporting.
    The information I need: is there a tool in Oracle for data warehousing? Is Oracle Warehouse Builder the right tool, given that the sources of our data are all Oracle databases plus some flat files?
    Could you advise what I should use for that kind of reporting need? Can I use Oracle Warehouse Builder to develop the ETL...?
    Do I need a license to use Oracle Warehouse Builder?
    Thank you for your help.

    For OWB and database/data warehouse topics, I would recommend the Oracle documentation:
    http://www.oracle.sh.cn/nav/portal_6.htm
    Then for tutorials, check this (Database and OWB):
    http://www.oracle.com/technology/obe/start/index.html
    For OBIEE:
    http://download.oracle.com/docs/cd/E10415_01/doc/index.htm
    What is the meaning of OBIEE? Is it Oracle Business Intelligence Enterprise Edition? Yes.
    Cheers
    Nawneet

  • Books about MVVM, architecture, design patterns for Windows Phone 8.1

    Hi,
    I'm looking for a book or books (or other resources) that explain how to develop an app with a proper architecture. I mean what each layer (business layer, data layer, network access) should do and what it should look like. I'm also looking for a book about MVVM.
    Right now I'm struggling with how to create a layer for network communication: how to separate classes for requests and responses, how to manage requests and create a queue of requests, how to provide a way to cancel them when they are no longer needed, and how to work with servers that use some level of security (cookies, certificates, etc.).
    Another thing is caching: how to design a short-term cache or a persistent cache (database), what technologies I can use, etc.
    The last thing I'm struggling with is naming: how to name classes in those layers, e.g. to distinguish between classes mapping data from some ORM database and classes mapping to JSON in network communication, etc.
    I hope you got the idea :)
    Thanks.

    Currently I can't find a book about the MVVM pattern specifically for Windows Phone 8.1, but I think MSDN and some blogs have useful samples and concepts:
    http://msdn.microsoft.com/en-us/library/windows/apps/jj883732.aspx
    http://channel9.msdn.com/Series/Windows-Phone-8-1-Development-for-Absolute-Beginners
    And I think your question covers too many topics; you may need to split it into blocks and get help in the related forums.
    Best Regards,

  • So slow performance I can see bad code architecture!  Help!

    I've been a Mac user since 2002; I'd resisted the dark side until then. And I've never had the kinds of problems I've been having for the past year, and I can't figure it out at all. I would really like to reinstall Mac OS X 10.7, but I have never used Time Machine to restore data and I'm not sure how the entire process works. Can someone tell me, or give me a relevant link, how to reinstall the entire OS cleanly, without keeping anything, and then repopulate my home directory with the data from the most recent Time Machine backup? Note that my backup sits on an external USB drive, which is partitioned so that the first 500 GB is used by Time Machine and the rest (about 1 TB) is storage space I keep files on (which are obviously not backed up!). So I'd need to restore from the first partition of the external drive. I am really worried that I'll have to hunt down and reinstall all the other apps I have on the system, some of which may not be available for download any more, perhaps? Or maybe I'd lose some settings and preferences I like? Does Time Machine back up that type of stuff, too, along with the home folder? I don't know... I have relied on maybe 1-2 Unix tools, too, but I think they are in my home directory, though I'm not sure!
    The real solution is that I'd just like to speed things up a bit, and not have to go through the reinstall process. It is a dual-core 2 GHz machine with 2 GB RAM. It's a 13" MacBook, mid-2006, so kinda old. But it used to run flawlessly up until about a year ago! I've run Disk Utility from Restore mode and it found no errors, and there were no permissions to be fixed either. Here is the breakdown of the symptoms this machine is suffering from.
    It takes 10 seconds for the hidden dock to appear when I move the mouse to the left edge of the screen, sometimes the beachball appears first.  The CPU monitor does not rise above 15% ever!
    Activating Exposé by moving the mouse to the bottom left corner takes roughly 7 seconds or longer. The beachball never appears; the CPU never goes above 20% either!
    Double-clicking a desktop folder can take up to 10 seconds to highlight the icon in blue, and then a further 20-30 seconds to actually open the folder. Sometimes the beachball appears two or three times; the CPU isn't ever above 15% either.
    Creating a new folder reveals a huge flaw in Apple's OS architecture due to the extreme latency of the system.  I can see the folder icon appear first, then a few seconds later the text to the right appears and it says "Untitled Folder", and then the text gets highlighted.  So they're wasting CPU time by showing it once and then highlighting it.  Then I type the new name and I press enter.  Instead of the new name appearing, I can see a further stupidity in design by Apple, because it reverts to "Untitled Folder" for about 5-10 seconds, and then the text is replaced with what I had typed.
    In iPhoto if I select Export and the dialog box is supposed to appear, it doesn't happen the first time I select Export, ever!  I always have to do it twice or three times, and then the dialog box appears.
    In TextEdit, when I select Save for an untitled document, the dialog box appears, but the "File Type" combo box is wrongly placed. Half of it is on the modal dialog, and the other half is on the desktop. If I resize the window, the jaggedly placed selection box remains attached at its mid-point to the right edge of the dialog box, so half of it is always on the desktop! This happens about 50% of the time.
    If I click on the TimeMachine menu icon (little circular arrow) I get the beachball and then after 15-20 seconds the menu appears!  If I click due to lack of patience many times, the clicks get queued up and then after 20 seconds they all get executed in rapid succession, so the menu opens and closes 3-4 times in a row! 
    The above happens for any menu bar icon, including WIFI, etc...
    Timemachine's "Preparing to backup" can take 10 minutes.  The actual backup takes about 15 minutes, even though it's only backing up 100MB.  And it's always backing up 100MB, every freaking hour (how stupid of Apple not to provide a human-friendly way to control the frequency!).  So for 30 minutes of every hour, Timemachine is backing up.
    I keep my iTunes library on my external drive, on the storage partition. When I'm playing music, all works fine. When I stop playing music, the external drive spins down. That's how it should be, to extend its life. But OS X insists on waking up the drive every 15 minutes or so, even when no application is using its data. Unfortunately, OS X is so badly designed that whenever an external drive is being woken up, the entire graphical interface completely and totally locks up for about 5-6 seconds. Streaming music may continue to play, but nothing else reacts to user input. Text typing freezes, mouse clicks don't work, things cannot be dragged, I cannot switch apps, etc. There are a ton of people affected by this problem, all posting on various forums, so reformatting and reinstalling won't fix this... It's another Apple crippling effect! It's been present for at least 2 major versions now!
    If I am typing an email in Chrome on gmail.com, occasionally the browser tab freezes and refuses to let me type anything for 10 seconds, and when this happens, iTunes starts cutting out for a second or two several times during that 10-second period, as if it can't handle the load. However, the CPU stays below 15% during this whole time, too!
    50% of the time when I wake the system from sleep it does not connect to WIFI.  I have to click on the menubar and select my network and then it connects.  My router is not flaky, my other computers do not have this problem, ever!  I have every OS imaginable, too
    Despite all these problems, I have no problems running music apps such as Reason or GarageBand, with one caveat. GarageBand struggles to play back songs I composed on a 17" PowerBook G4!!!! They have maybe 20 tracks of instruments/audio files, and that ancient CPU could play them without stopping. This dual-core 2 GHz machine with 2 GB of RAM periodically reports an overload and stops playback. The CPU does not even reach 70% during the problematic portion. Reason never complains; it's always below 10% no matter what I'm doing, and never gives me any problems whatsoever.
    Launching any application takes a full minute, or longer.  Chrome takes 2 full minutes to stop bouncing!  Apple's Mail app takes a minute or a bit longer!
    A basic Google search (typing text into Chrome's bar and pressing enter) takes 25-45 seconds to return results, despite the fact that I can download and access streaming and other content instantly from Netflix and other places without any delays!!
    ***?!?!?!? Is this a virus or something?!?!?!? I only install software I buy from the App Store or at a store (believe it or not, there still is boxed software; it's super expensive though, hehe!)
    HELP!! 
    p.s. The third option I am thinking of is creating another user account and moving everything over. But I don't know how to do that safely. For example, I think iTunes doesn't properly store its data: even though I've told it to keep the library on the external drive, the "library" files (XML or whatever) are kept on the local disk in the home directory? I don't know, because I have two sets of those files, one in my user's directory and one in the iTunes folder on the external drive.

    You are surprised that Mac OS X 10.7 is running on hardware that Apple officially supports and says it will run on? I find your logic unbelievably flawed.
    Not a single problem I've mentioned would be caused by a "slow" CPU or "too little" RAM. How does iPhoto not bringing up the file dialog box have anything even remotely to do with the speed of the CPU or the quantity of RAM? If what you say were true, then yes, performance would be bad, but it wouldn't be selectively bad. I stated that apps such as Reason 5 run perfectly fine, even really well. I can load 30-50 racks and synths and not even reach 50% CPU utilization; Reason never freezes, etc. DVD playback never freezes as well!! Your logic is horrendously flawed; it seems you don't know much about computer architecture. Thus, unfortunately, your reply was of absolutely no help, sorry.
    p.s. The external disk spin-up problem freezing the entire graphical interface also has nothing to do with my CPU/RAM or the OS version I'm running. It is a known crippleware factor of Mac OS X that at least a thousand people are complaining about online, and Apple has done nothing about it!
