BI Environments

Hello All,
Just curious to know what kind of environment you are using for testing out new Oracle products.
I used to install things on my laptop, virtual servers, etc., but day by day Oracle's products are hogging more of my memory and disk space, and I cannot meaningfully start even a couple of servers in tandem.
-- I typically use OEL and install applications on it.
-- I use pretty decent hardware with at least 8GB RAM and 250+GB of storage space
-- I am not even able to start the demo VMs, or if I do start one, the machine is so slow that it's impossible to work on it!
-- I have tried installing from scratch in VirtualBox, VMware, etc., with no improvement.
-- I do not want to test my code/new developments in the enterprise environments that we have installed.
I am having trouble with the latest releases, as they are even bigger; I am not even able to complete the installations.
Questions:
--What kind of environments do you guys use to test out new things/applications?
--What OS do you guys use?
--Do you guys have memory/hardware-related problems installing and configuring an 11.1.1.7 environment?
--How do you test applications in tandem, like Oracle BI and Endeca, or any ETL tool with BI Apps + Oracle BI?
Any suggestions/solutions welcome.

--What kind of environments do you guys use to test out new things/applications?
Laptop
--What OS do you guys use?
Windows 7 64-bit
--Do you guys have memory/hardware-related problems installing and configuring an 11.1.1.7 environment?
With 8 GB RAM, a 1 TB HDD, and an i5, the BI environment runs fine.
--How do you test applications in tandem, like Oracle BI and Endeca, or any ETL tool with BI Apps + Oracle BI?
For the Apps environment, if you can pay $300 for an Oracle Apps (Vision) instance on one of your servers, you can use it to run loads.
I use the above configuration for 7.9.6.4, but with the latest version (ODI + 11.1.1.7.1) it was very slow, so I upgraded to 16 GB RAM and now it is running smoothly.
Thanks,
Saichand

Similar Messages

  • Best practice to move things between various environments in SharePoint 2013

    Hi All SharePoint Gurus!! - I was using the SP Deployment Wizard (spdeploymentwizard.codeplex.com) to move sites/lists/libraries/items etc. in SP 2010. We just upgraded to SP 2013. I have a few lists and libraries that I need to push from the Development 2013 environment into the Staging 2013 and Production 2013 environments. SP Deployment Wizard throws an error right from startup. I checked that SP 2013 provides granular backups, but they are restricted to the list/library level. Could anybody let me know if SP Deployment Wizard works for 2013? I love that tool. Also, what's the best practice to move things between various environments?
    Regards,
    Khushi

    Hi Khushi,
    I want to let you know that we built a SharePoint migration tool, MetaVis Migrator, that can copy and migrate to and from on-premises or hosted SharePoint sites. The tool can copy entire sites with sub-site hierarchies, content types, fields, lists, list views, documents, items with attachments, look-and-feel elements, permissions, groups, and other objects - all together or at any level of granularity (for example, just lists, just list views, or selected items). The tool preserves created/modified properties, all metadata, and versions. It looks like Windows Explorer, with copy/paste and drag-and-drop functions, so it is easy to learn. It does not require any server-side installation, so you can do everything from your own computer or any other machine. The tool can copy complete sites, individual lists, or even selected items, and it supports incremental (delta) copies based on previous migrations. It also includes a Pre-Migration Analysis that helps identify customizations.
    A free trial is available:
    http://www.metavistech.com . Feel free to contact us.
    Good luck with your migration project,
    Mark

  • CBO generating different plans for the same data in similar environments

    Hi All
    I have been trying to compare a SQL statement across two different but similar environments built to the same hardware specs. The issue I am facing: in environment A the query executes in less than 2 minutes, with a plan mostly showing full table scans and hash joins, whereas in environment B (the problematic one) it times out after 2 hours with an "unable to extend tablespace" error. The statistics are up to date in both environments for both tables and indexes. System parameters are exactly the same (Oracle defaults, except for db_file_multiblock_read_count).
    Both environments have the same db parameters for db_file_multiblock_read_count (16), the optimizer (see below), hash_area_size (131072), pga_aggregate_target (1G), db_block_size (8192), etc. SREADTIM, MREADTIM, CPUSPEED, and MBRC are all null in aux_stats$ in both environments because, I believe, workload statistics were never collected.
    Attached are details about the SQL, its table stats, and its index stats. My main concern is the CBO generating different plans for similar data and statistics on the same hardware and software specs. Is there anything else I should consider? I generally see environment B being very slow, with plans that always tend toward nested loops and index scans, whereas what we really need in many cases is a sensible FTS. One very surprising thing is METER_CONFIG_HEADER below: it has just 80 blocks of data, yet the optimizer asks for an index scan on it.
    show parameter optimizer
    optimizer_dynamic_sampling integer 2
    optimizer_features_enable string 10.2.0.4
    optimizer_index_caching integer 0
    optimizer_index_cost_adj integer 100
    optimizer_mode string ALL_ROWS
    optimizer_secure_view_merging boolean TRUE
    Environment
    Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
    PL/SQL Release 10.2.0.4.0 - Production
    CORE     10.2.0.4.0     Production
    TNS for Solaris: Version 10.2.0.4.0 - Production
    NLSRTL Version 10.2.0.4.0 - Production
    Note: there are slight differences in the number of records in the attached sheet. However, I did test with the exact same data and got similar results; I just couldn't retain that data until collecting the details in the attachment.
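    For context, a minimal sketch of how table- and index-level figures like those below can be pulled from the standard dictionary views (the &SCHEMA owner filter is a placeholder for the actual schema):
    -- Table-level statistics as seen by the CBO
    SELECT table_name, num_rows, blocks, last_analyzed
      FROM dba_tables
     WHERE owner = '&SCHEMA'
     ORDER BY table_name;
    -- Index-level statistics
    SELECT table_name, index_name, uniqueness, blevel, leaf_blocks,
           distinct_keys, avg_leaf_blocks_per_key, avg_data_blocks_per_key,
           clustering_factor, num_rows
      FROM dba_indexes
     WHERE owner = '&SCHEMA'
     ORDER BY table_name, index_name;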
    TEST ITEM 1     COMPARE TABLE-LEVEL STATS used by CBO
    ENVIRONMENT A
    TABLE_NAME     NUM_ROWS     BLOCKS     LAST_ANALYZED
    ASSET     3607425     167760     5/02/2013 22:11
    METER_CONFIG_HEADER     3658     80     5/01/2013 0:07
    METER_CONFIG_ITEM     32310     496     5/01/2013 0:07
    NMI     1899024     33557     18/02/2013 10:55
    REGISTER     4830153     101504     18/02/2013 9:57
    SDP_LOGICAL_ASSET     1607456     19137     18/02/2013 15:48
    SDP_LOGICAL_REGISTER     5110781     78691     18/02/2013 9:56
    SERVICE_DELIVERY_POINT     1425890     42468     18/02/2013 13:54
    ENVIRONMENT B
    TABLE_NAME     NUM_ROWS     BLOCKS     LAST_ANALYZED
    ASSET     4133939     198570     16/02/2013 10:02
    METER_CONFIG_HEADER     3779     80     16/02/2013 10:55
    METER_CONFIG_ITEM     33720     510     16/02/2013 10:55
    NMI     1969000     33113     16/02/2013 10:58
    REGISTER     5837874     120104     16/02/2013 11:05
    SDP_LOGICAL_ASSET     1788152     22325     16/02/2013 11:06
    SDP_LOGICAL_REGISTER     6101934     91088     16/02/2013 11:07
    SERVICE_DELIVERY_POINT     1447589     43804     16/02/2013 11:11
    TEST ITEM 2     COMPARE INDEX STATS  used by CBO          
    ENVIRONMENT A
    TABLE_NAME     INDEX_NAME     UNIQUENESS     BLEVEL     LEAF_BLOCKS     DISTINCT_KEYS     AVG_LEAF_BLOCKS_PER_KEY     AVG_DATA_BLOCKS_PER_KEY     CLUSTERING_FACTOR     NUM_ROWS
    ASSET     IDX_AST_DEVICE_CATEGORY_SK     NONUNIQUE     2     9878     67     147     12982     869801     3553095
    ASSET     IDX_A_SAPINTLOGDEV_SK     NONUNIQUE     2     7291     2747     2     639     1755977     3597916
    ASSET     SYS_C00102592     UNIQUE     2     12488     3733831     1     1     3726639     3733831
    METER_CONFIG_HEADER     SYS_C0092052     UNIQUE     1     12     3670     1     1     3590     3670
    METER_CONFIG_ITEM     SYS_C0092074     UNIQUE     1     104     32310     1     1     32132     32310
    NMI     IDX_NMI_ID     NONUNIQUE     2     6298     844853     1     2     1964769     1965029
    NMI     IDX_NMI_ID_NK     NONUNIQUE     2     6701     1923072     1     1     1922831     1923084
    NMI     IDX_NMI_STATS     NONUNIQUE     1     106     4     26     52     211     211
    REGISTER     REG_EFFECTIVE_DTM     NONUNIQUE     2     12498     795     15     2899     2304831     4711808
    REGISTER     SYS_C00102653     UNIQUE     2     16942     5065660     1     1     5056855     5065660
    SDP_LOGICAL_ASSET     IDX_SLA_SAPINTLOGDEV_SK     NONUNIQUE     2     3667     1607968     1     1     1607689     1607982
    SDP_LOGICAL_ASSET     IDX_SLA_SDP_SK     NONUNIQUE     2     3811     668727     1     2     1606204     1607982
    SDP_LOGICAL_ASSET     SYS_C00102665     UNIQUE     2     5116     1529606     1     1     1528136     1529606
    SDP_LOGICAL_REGISTER     SYS_C00102677     UNIQUE     2     17370     5193638     1     1     5193623     5193638
    SERVICE_DELIVERY_POINT     IDX_SDP_NMI_SK     NONUNIQUE     2     4406     676523     1     2     1423247     1425890
    SERVICE_DELIVERY_POINT     IDX_SDP_SAP_INT_NMI_SK     NONUNIQUE     2     7374     676523     1     2     1458238     1461108
    SERVICE_DELIVERY_POINT     SYS_C00102687     UNIQUE     2     4737     1416207     1     1     1415022     1416207
    ENVIRONMENT B
    TABLE_NAME     INDEX_NAME     UNIQUENESS     BLEVEL     LEAF_BLOCKS     DISTINCT_KEYS     AVG_LEAF_BLOCKS_PER_KEY     AVG_DATA_BLOCKS_PER_KEY     CLUSTERING_FACTOR     NUM_ROWS
    ASSET     IDX_AST_DEVICE_CATEGORY_SK     NONUNIQUE     2     8606     121     71     16428     1987833     4162257
    ASSET     IDX_A_SAPINTLOGDEV_SK     NONUNIQUE     2     8432     1780146     1     1     2048170     4162257
    ASSET     SYS_C00116157     UNIQUE     2     13597     4162263     1     1     4158759     4162263
    METER_CONFIG_HEADER     SYS_C00116570     UNIQUE     1     12     3779     1     1     3734     3779
    METER_CONFIG_ITEM     SYS_C00116592     UNIQUE     1     107     33720     1     1     33459     33720
    NMI     IDX_NMI_ID     NONUNIQUE     2     6319     683370     1     2     1970460     1971313
    NMI     IDX_NMI_ID_NK     NONUNIQUE     2     6597     1971293     1     1     1970771     1971313
    NMI     IDX_NMI_STATS     NONUNIQUE     1     98     48     2     4     196     196
    REGISTER     REG_EFFECTIVE_DTM     NONUNIQUE     2     15615     1273     12     2109     2685924     5886582
    REGISTER     SYS_C00116748     UNIQUE     2     19533     5886582     1     1     5845565     5886582
    SDP_LOGICAL_ASSET     IDX_SLA_SAPINTLOGDEV_SK     NONUNIQUE     2     4111     1795084     1     1     1758441     1795130
    SDP_LOGICAL_ASSET     IDX_SLA_SDP_SK     NONUNIQUE     2     4003     674249     1     2     1787987     1795130
    SDP_LOGICAL_ASSET     SYS_C004520     UNIQUE     2     5864     1795130     1     1     1782147     1795130
    SDP_LOGICAL_REGISTER     SYS_C004539     UNIQUE     2     20413     6152850     1     1     6073059     6152850
    SERVICE_DELIVERY_POINT     IDX_SDP_NMI_SK     NONUNIQUE     2     3227     660649     1     2     1422572     1447803
    SERVICE_DELIVERY_POINT     IDX_SDP_SAP_INT_NMI_SK     NONUNIQUE     2     6399     646257     1     2     1346948     1349993
    SERVICE_DELIVERY_POINT     SYS_C00128706     UNIQUE     2     4643     1447946     1     1     1442796     1447946
    TEST ITEM 3     COMPARE PLANS     
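    A minimal sketch (assuming SQL*Plus) of how plans like the two below can be regenerated for a side-by-side diff; the statement itself is not posted in this thread, so &PROBLEM_SQL is a placeholder:
    -- Run in each environment, then diff the two outputs
    EXPLAIN PLAN FOR
    &PROBLEM_SQL;
    SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);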
    ENVIRONMENT A
    Plan hash value: 4109575732                                             
    | Id  | Operation                       | Name                   | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     |                                             
    |   0 | SELECT STATEMENT                |                        |    13 |  2067 |       |   135K  (2)| 00:27:05 |                                             
    |   1 |  HASH UNIQUE                    |                        |    13 |  2067 |       |   135K  (2)| 00:27:05 |                                             
    |*  2 |   HASH JOIN                     |                        |    13 |  2067 |       |   135K  (2)| 00:27:05 |                                             
    |*  3 |    HASH JOIN                    |                        |     6 |   900 |       |   135K  (2)| 00:27:04 |                                             
    |*  4 |     HASH JOIN ANTI              |                        |     1 |   137 |       |   135K  (2)| 00:27:03 |                                             
    |*  5 |      TABLE ACCESS BY INDEX ROWID| NMI                    |     1 |    22 |       |     5   (0)| 00:00:01 |                                             
    |   6 |       NESTED LOOPS              |                        |     1 |   131 |       | 95137   (2)| 00:19:02 |                                             
    |*  7 |        HASH JOIN                |                        |     1 |   109 |       | 95132   (2)| 00:19:02 |                                             
    |*  8 |         TABLE ACCESS FULL       | ASSET                  | 36074 |  1021K|       | 38553   (2)| 00:07:43 |                                             
    |*  9 |         HASH JOIN               |                        | 90361 |  7059K|  4040K| 56578   (2)| 00:11:19 |                                             
    |* 10 |          HASH JOIN              |                        | 52977 |  3414K|  2248K| 50654   (2)| 00:10:08 |                                             
    |* 11 |           HASH JOIN             |                        | 39674 |  1782K|       | 40101   (2)| 00:08:02 |                                             
    |* 12 |            TABLE ACCESS FULL    | REGISTER               | 39439 |  1232K|       | 22584   (2)| 00:04:32 |                                             
    |* 13 |            TABLE ACCESS FULL    | SDP_LOGICAL_REGISTER   |  4206K|    56M|       | 17490   (2)| 00:03:30 |                                             
    |* 14 |           TABLE ACCESS FULL     | SERVICE_DELIVERY_POINT |   675K|    12M|       |  9412   (2)| 00:01:53 |                                             
    |* 15 |          TABLE ACCESS FULL      | SDP_LOGICAL_ASSET      |  1178K|    15M|       |  4262   (2)| 00:00:52 |                                             
    |* 16 |        INDEX RANGE SCAN         | IDX_NMI_ID_NK          |     2 |       |       |     2   (0)| 00:00:01 |                                             
    |  17 |      VIEW                       |                        | 39674 |   232K|       | 40101   (2)| 00:08:02 |                                             
    |* 18 |       HASH JOIN                 |                        | 39674 |  1046K|       | 40101   (2)| 00:08:02 |                                             
    |* 19 |        TABLE ACCESS FULL        | REGISTER               | 39439 |   500K|       | 22584   (2)| 00:04:32 |                                             
    |* 20 |        TABLE ACCESS FULL        | SDP_LOGICAL_REGISTER   |  4206K|    56M|       | 17490   (2)| 00:03:30 |                                             
    |* 21 |     TABLE ACCESS FULL           | METER_CONFIG_HEADER    |  3658 | 47554 |       |    19   (0)| 00:00:01 |                                             
    |* 22 |    TABLE ACCESS FULL            | METER_CONFIG_ITEM      |  7590 | 68310 |       |   112   (2)| 00:00:02 |                                             
    Predicate Information (identified by operation id):                                             
       2 - access("METER_CONFIG_HEADER_SK"="METER_CONFIG_HEADER_SK")                                             
       3 - access("NETWORK_TARIFF_CD"="NETWORK_TARIFF_CD")                                             
       4 - access("SERVICE_DELIVERY_POINT_SK"="TMP"."SERVICE_DELIVERY_POINT_SK")                                             
       5 - filter("ROW_CURRENT_IND"='Y' AND ("NMI_STATUS_CD"='A' OR "NMI_STATUS_CD"='D'))                                             
       7 - access("ASSET_CD"="EQUIP_CD" AND "SAP_INT_LOG_DEVICE_SK"="SAP_INT_LOG_DEVICE_SK")                                             
       8 - filter("ROW_CURRENT_IND"='Y')                                             
       9 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")                                             
      10 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")                                             
      11 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")                                             
      12 - filter("REGISTER_TYPE_CD"='C' AND (SUBSTR("REGISTER_ID_CD",1,1)='4' OR                                              
                  SUBSTR("REGISTER_ID_CD",1,1)='5' OR SUBSTR("REGISTER_ID_CD",1,1)='6') AND "ROW_CURRENT_IND"='Y')                                             
      13 - filter("ROW_CURRENT_IND"='Y')                                             
      14 - filter("ROW_CURRENT_IND"='Y')                                             
      15 - filter("ROW_CURRENT_IND"='Y')                                             
      16 - access("NMI_SK"="NMI_SK")                                             
      18 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")                                             
      19 - filter("REGISTER_TYPE_CD"='C' AND (SUBSTR("REGISTER_ID_CD",1,1)='1' OR                                              
                  SUBSTR("REGISTER_ID_CD",1,1)='2' OR SUBSTR("REGISTER_ID_CD",1,1)='3') AND "ROW_CURRENT_IND"='Y')                                             
      20 - filter("ROW_CURRENT_IND"='Y')                                             
      21 - filter("ROW_CURRENT_IND"='Y')                                             
      22 - filter("ROW_CURRENT_IND"='Y' AND "CONROL_REGISTER"='X')                                             
    ENVIRONMENT B
    Plan hash value: 2826260434                                   
    | Id  | Operation                            | Name                   | Rows  | Bytes | Cost (%CPU)| Time     |                                   
    |   0 | SELECT STATEMENT                     |                        |     1 |   181 |   103K  (2)| 00:20:47 |                                   
    |   1 |  HASH UNIQUE                         |                        |     1 |   181 |   103K  (2)| 00:20:47 |                                   
    |*  2 |   HASH JOIN ANTI                     |                        |     1 |   181 |   103K  (2)| 00:20:47 |                                   
    |*  3 |    HASH JOIN                         |                        |     1 |   176 | 56855   (2)| 00:11:23 |                                   
    |*  4 |     HASH JOIN                        |                        |     1 |   163 | 36577   (2)| 00:07:19 |                                   
    |*  5 |      TABLE ACCESS BY INDEX ROWID     | ASSET                  |     1 |    44 |     4   (0)| 00:00:01 |                                   
    |   6 |       NESTED LOOPS                   |                        |     1 |   131 |  9834   (2)| 00:01:59 |                                   
    |   7 |        NESTED LOOPS                  |                        |     1 |    87 |  9830   (2)| 00:01:58 |                                   
    |   8 |         NESTED LOOPS                 |                        |     1 |    74 |  9825   (2)| 00:01:58 |                                   
    |*  9 |          HASH JOIN                   |                        |     1 |    52 |  9820   (2)| 00:01:58 |                                   
    |* 10 |           TABLE ACCESS BY INDEX ROWID| METER_CONFIG_HEADER    |     1 |    14 |     1   (0)| 00:00:01 |                                   
    |  11 |            NESTED LOOPS              |                        |     1 |    33 |   116   (2)| 00:00:02 |                                   
    |* 12 |             TABLE ACCESS FULL        | METER_CONFIG_ITEM      |     1 |    19 |   115   (2)| 00:00:02 |                                   
    |* 13 |             INDEX RANGE SCAN         | SYS_C00116570          |     1 |       |     1   (0)| 00:00:01 |                                   
    |* 14 |           TABLE ACCESS FULL          | SERVICE_DELIVERY_POINT |   723K|    13M|  9699   (2)| 00:01:57 |                                   
    |* 15 |          TABLE ACCESS BY INDEX ROWID | NMI                    |     1 |    22 |     5   (0)| 00:00:01 |                                   
    |* 16 |           INDEX RANGE SCAN           | IDX_NMI_ID_NK          |     2 |       |     2   (0)| 00:00:01 |                                   
    |* 17 |         TABLE ACCESS BY INDEX ROWID  | SDP_LOGICAL_ASSET      |     1 |    13 |     5   (0)| 00:00:01 |                                   
    |* 18 |          INDEX RANGE SCAN            | IDX_SLA_SDP_SK         |     2 |       |     2   (0)| 00:00:01 |                                   
    |* 19 |        INDEX RANGE SCAN              | IDX_A_SAPINTLOGDEV_SK  |     2 |       |     2   (0)| 00:00:01 |                                   
    |* 20 |      TABLE ACCESS FULL               | REGISTER               | 76113 |  2378K| 26743   (2)| 00:05:21 |                                   
    |* 21 |     TABLE ACCESS FULL                | SDP_LOGICAL_REGISTER   |  5095K|    63M| 20245   (2)| 00:04:03 |                                   
    |  22 |    VIEW                              |                        | 90889 |   443K| 47021   (2)| 00:09:25 |                                   
    |* 23 |     HASH JOIN                        |                        | 90889 |  2307K| 47021   (2)| 00:09:25 |                                   
    |* 24 |      TABLE ACCESS FULL               | REGISTER               | 76113 |   966K| 26743   (2)| 00:05:21 |                                   
    |* 25 |      TABLE ACCESS FULL               | SDP_LOGICAL_REGISTER   |  5095K|    63M| 20245   (2)| 00:04:03 |                                   
    Predicate Information (identified by operation id):                                   
       2 - access("SERVICE_DELIVERY_POINT_SK"="TMP"."SERVICE_DELIVERY_POINT_SK")                                   
       3 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK" AND                                    
                  "SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")                                   
       4 - access("ASSET_CD"="EQUIP_CD")                                   
       5 - filter("ROW_CURRENT_IND"='Y')                                   
       9 - access("NETWORK_TARIFF_CD"="NETWORK_TARIFF_CD")                                   
      10 - filter("ROW_CURRENT_IND"='Y')                                   
      12 - filter("ROW_CURRENT_IND"='Y' AND "CONROL_REGISTER"='X')                                   
      13 - access("METER_CONFIG_HEADER_SK"="METER_CONFIG_HEADER_SK")                                   
      14 - filter("ROW_CURRENT_IND"='Y')                                   
      15 - filter("ROW_CURRENT_IND"='Y' AND ("NMI_STATUS_CD"='A' OR "NMI_STATUS_CD"='D'))                                   
      16 - access("NMI_SK"="NMI_SK")                                   
      17 - filter("ROW_CURRENT_IND"='Y')                                   
      18 - access("SERVICE_DELIVERY_POINT_SK"="SERVICE_DELIVERY_POINT_SK")                                   
      19 - access("SAP_INT_LOG_DEVICE_SK"="SAP_INT_LOG_DEVICE_SK")                                   
      20 - filter((SUBSTR("REGISTER_ID_CD",1,1)='4' OR SUBSTR("REGISTER_ID_CD",1,1)='5' OR                                    
                  SUBSTR("REGISTER_ID_CD",1,1)='6') AND "REGISTER_TYPE_CD"='C' AND "ROW_CURRENT_IND"='Y')                                   
      21 - filter("ROW_CURRENT_IND"='Y')                                   
      23 - access("SAP_INT_LOGICAL_REGISTER_SK"="SAP_INT_LOGICAL_REGISTER_SK")                                   
      24 - filter((SUBSTR("REGISTER_ID_CD",1,1)='1' OR SUBSTR("REGISTER_ID_CD",1,1)='2' OR                                    
                  SUBSTR("REGISTER_ID_CD",1,1)='3') AND "REGISTER_TYPE_CD"='C' AND "ROW_CURRENT_IND"='Y')                                   
      25 - filter("ROW_CURRENT_IND"='Y')

    Hi Paul,
    I misread your question initially. The system stats are outdated in both environments (the same result, as seen from aux_stats$). I am not a DBA and do not have access to gather fresh system stats; a sketch of how a DBA could refresh them follows the output below.
    select * from sys.aux_stats$
    SNAME     PNAME     PVAL1     PVAL2
    SYSSTATS_INFO     STATUS     NULL     COMPLETED
    SYSSTATS_INFO     DSTART     NULL     02-16-2011 15:24
    SYSSTATS_INFO     DSTOP     NULL     02-16-2011 15:24
    SYSSTATS_INFO     FLAGS     1     NULL
    SYSSTATS_MAIN     CPUSPEEDNW     1321.20523     NULL
    SYSSTATS_MAIN     IOSEEKTIM     10     NULL
    SYSSTATS_MAIN     IOTFRSPEED     4096     NULL
    SYSSTATS_MAIN     SREADTIM     NULL     NULL
    SYSSTATS_MAIN     MREADTIM     NULL     NULL
    SYSSTATS_MAIN     CPUSPEED     NULL     NULL
    SYSSTATS_MAIN     MBRC     NULL     NULL
    SYSSTATS_MAIN     MAXTHR     NULL     NULL
    SYSSTATS_MAIN     SLAVETHR     NULL     NULL
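    A minimal sketch (assuming a DBA account with the GATHER_SYSTEM_STATISTICS role) of how these could be refreshed with DBMS_STATS; NOWORKLOAD mode recomputes only CPUSPEEDNW, IOSEEKTIM, and IOTFRSPEED, while INTERVAL mode captures the workload-dependent SREADTIM, MREADTIM, CPUSPEED, and MBRC values that are null above:
    -- Noworkload system statistics: no representative load required
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS('NOWORKLOAD');
    -- Workload system statistics: capture over a representative window
    -- (the 60-minute interval is an assumption, not a recommendation)
    EXEC DBMS_STATS.GATHER_SYSTEM_STATS(gathering_mode => 'INTERVAL', interval => 60);
    -- Or set individual values manually if gathering is not possible
    EXEC DBMS_STATS.SET_SYSTEM_STATS('MBRC', 16);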

  • Increase Performance and ROI for SQL Server Environments

    May 2015
    Explore
    The Buzz from Microsoft Ignite 2015
    NetApp was in full force at the recent Microsoft Ignite show in Chicago, talking about solutions for hybrid cloud, and our proven solutions for Microsoft SQL Server and other Microsoft applications.
    Hot topics at the NetApp booth included:
    OnCommand® Shift. A revolutionary technology that lets you move virtual machines back and forth between VMware and Hyper-V environments in minutes.
    Azure Site Recovery to NetApp Private Storage. Replicate on-premises SAN-based applications to NPS for disaster recovery in the Azure cloud.
    These tools give you greater flexibility for managing and protecting important business applications.
    Chris Lemmons
    Director, EIS Technical Marketing, NetApp
    If your organization runs databases such as Microsoft SQL Server and Oracle DB, you probably know that these vendors primarily license their products on a "per-core" basis. Microsoft recently switched to "per-core" rather than "per-socket" licensing for SQL Server 2012 and 2014. This change can have a big impact on the total cost of operating a database, especially as core counts on new servers continue to climb. It turns out that the right storage infrastructure can drive down database costs, increase productivity, and put your infrastructure back in balance.
    In many customer environments, NetApp has noticed that server CPU utilization is low—often on the order of just 20%. This is usually the result of I/O bottlenecks. Server cores have to sit and wait for I/O from hard disk drives (HDDs). We've been closely studying the impact of all-flash storage on SQL Server environments that use HDD-based storage systems. NetApp® All Flash FAS platform delivers world-class performance for SQL Server plus the storage efficiency, application integration, nondisruptive operations, and data protection of clustered Data ONTAP®, making it ideal for SQL Server environments.
    Tests show that All Flash FAS can drive up IOPS and database server CPU utilization by as much as 4x. And with a 95% reduction in latency, you can achieve this level of performance with half as many servers. This reduces the number of servers you need and the number of cores you have to license, driving down costs by 50% or more and paying back your investment in flash in as little as six months.
    Figure 1) NetApp All Flash FAS increases CPU utilization on your SQL Server database servers, lowering costs.
    Source: NetApp, 2015
    Whether you're running one of the newer versions of SQL Server or facing an upgrade of an earlier version, you can't afford not to take a second look at your storage environment.
    End of Support for Microsoft SQL Server 2005 is Rapidly Approaching
    Microsoft has set the end of extended support for SQL Server 2005 for April 2016—less than a year away. With support for Microsoft Windows 2003 ending in July 2015, time may already be running short.
    If you're running Windows Server 2003, new server hardware is almost certainly needed when you upgrade SQL Server. Evaluate your server and storage options now to get costs under control.
    Test Methodology
    To test the impact of flash on SQL Server performance, we replaced a legacy HDD-based storage system with an All Flash FAS AFF8080 EX. The legacy system was configured with almost 150 HDDs, a typical configuration for HDD storage supporting SQL Server. The AFF8080 EX used just 48 SSDs.
    Table 1) Components used in testing.
    Test Configuration Component     Details
    SQL Server 2014 servers          Fujitsu RX300
    Server operating system          Microsoft Windows 2012 R2 Standard Edition
    SQL Server database version      Microsoft SQL Server 2014 Enterprise Edition
    Processors per server            2 x 6-core Xeon E5-2630 at 2.30 GHz
    Fibre Channel network            8 Gb FC with multipathing
    Storage controller               AFF8080 EX
    Data ONTAP version               Clustered Data ONTAP® 8.3.1
    Drive number and type            48 SSD
    Source: NetApp, 2015
    The test configuration consisted of 10 database servers connected through fibre channel to both the legacy storage system and the AFF8080 EX. Each of the 10 servers ran SQL Server 2014 Enterprise Edition.
    The publicly available HammerDB workload generator was used to drive an OLTP-like workload simultaneously from each of the 10 database servers to storage. We first directed the workload to the legacy storage array to establish a baseline, increasing the load to the point where read latency consistently exceeded 20ms.
    That workload was then directed at the AFF8080 EX. The change in storage resulted in an overall 20x reduction in read latency, a greater than 4x improvement in IOPS, and a greater than 4x improvement in database server CPU utilization.
    Figure 2) NetApp All Flash FAS increases IOPS and server CPU utilization and lowers latency.
    Source: NetApp, 2015
    In other words, the database servers are able to process four times as many IOPS with dramatically lower latency. CPU utilization goes up accordingly because the servers are processing 4x the work per unit time.
    The All Flash FAS system still had additional headroom under this load.
    Calculating the Savings
    Let's look at what this performance improvement means for the total cost of running SQL Server 2014 over a 3-year period. To do the analysis we used NetApp Realize, a storage modeling and financial analysis tool designed to help quantify the value of NetApp solutions and products. NetApp sales teams and partners use this tool to assist with return on investment (ROI) calculations.
    The calculation includes the cost of the AFF8080 EX, eliminates the costs associated with the existing storage system, and cuts the total number of database servers from 10 to five. This reduces SQL Server licensing costs by 50%. The same workload was run with five servers and achieved the same results. ROI analysis is summarized in Table 2.
    Table 2) ROI from replacing an HDD-based storage system with All Flash FAS, thereby cutting server and licensing costs in half.
    Value                                                                      Analysis Results
    ROI                                                                        65%
    Net present value (NPV)                                                    $950,000
    Payback period                                                             Six months
    Total cost reduction                                                       More than $1 million saved over a 3-year analysis period compared to the legacy storage system
    Savings on power, space, and administration                                $40,000
    Additional savings due to nondisruptive operations (not included in ROI)   $90,000
    Source: NetApp, 2015
    The takeaway here is that you can replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs, with the majority of the savings derived from the reduction in SQL Server licensing costs.
    Replace your existing storage with All Flash FAS and get a big performance bump while substantially reducing your costs.
    Maximum SQL Server 2014 Performance
    In addition to the ROI analysis, we also measured the maximum performance of the AFF8080 EX with SQL Server 2014. A load-generation tool was used to simulate an industry-standard TPC-E OLTP workload against an SQL Server 2014 test configuration.
    A two-node AFF8080 EX achieved a maximum throughput of 322K IOPS at just over 1ms latency. For all points other than the maximum load point, latency was consistently under 1ms and remained under 0.8ms up to 180K IOPS.
    Data Reduction and Storage Efficiency
    In addition to performance testing, we looked at the overall storage efficiency savings of our SQL Server database implementation. The degree of compression that can be achieved is dependent on the actual data that is written and stored in the database. For this environment, inline compression was effective. Deduplication, as is often the case in database environments, provided little additional storage savings and was not enabled.
    For the test data used in the maximum performance test, we measured a compression ratio of 1.5:1. We also tested inline compression on a production SQL Server 2014 data set to further validate these results and saw a 1.8:1 compression ratio.
    Space-efficient NetApp Snapshot® copies provide additional storage efficiency benefits for database environments. Unlike snapshot methods that use copy-on-write, there is no performance penalty; unlike full mirror copies, NetApp Snapshot copies use storage space sparingly. Snapshot copies only consume a small amount of storage space for metadata and additional incremental space is consumed as block-level changes occur. In a typical real-world SQL Server deployment on NetApp storage, database volume Snapshot copies are made every two hours.
    First introduced more than 10 years ago, NetApp FlexClone® technology also plays an important role in SQL Server environments. Clones are fully writable, and, similar to Snapshot copies, only consume incremental storage capacity. With FlexClone, you can create as many copies of production data as you need for development and test, reporting, and so on. Cloning is a great way to support the development and test work needed when upgrading from an earlier version of SQL Server. You'll sometimes see these types of capabilities referred to as "copy data management."
    A Better Way to Run Enterprise Applications
    The performance benefits that all-flash storage can deliver for database environments are significant: more IOPS, lower latency, and an end to near-constant performance tuning.
    If you think the performance acceleration that comes with all-flash storage is cost prohibitive, think again. All Flash FAS doesn't just deliver a performance boost, it changes the economics of your operations, paying for itself with thousands in savings on licensing and server costs. In terms of dollars per IOPS, All Flash FAS is extremely economical relative to HDD.
    And, because All Flash FAS runs NetApp clustered Data ONTAP, it delivers the most complete environment to support SQL Server and all your enterprise applications with capabilities that include comprehensive storage efficiency, integrated data protection, and deep integration for your applications.
    For complete details on this testing look for NetApp TR-4303, which will be available in a few weeks. Stay tuned to Tech OnTap for more information as NetApp continues to run benchmarks with important server workloads including Oracle DB and server virtualization.
    Learn more about NetApp solutions for SQL Server and NetApp All-flash solutions.

  • What is the best way to export and import updates between WSUS servers for specific KBs, with environments that have different data sets?

    We are under strict regulations and monitoring at my company. Because of this, we have to define, by KB article, which updates get installed during a patch cycle. We have two production networks and one test network. The installed updates have to be exactly the same: they run in the test environment for a few days with no problems, and then we install them in our production networks. The issue is that not all of the software in the environments is identical.
    I need to figure out a way to take the failed/needed updates from production that are unapproved and export that list, so I can approve all of those updates in the test environment. Then, once I approve those and any other needed updates in test, I need to be able to export all approved updates from the test environment (which includes the ones needed in production) and import those approvals into production.
    How can this be done?
    Thanks!

    The easiest way will be to use a single WSUS infrastructure for all your environments.
    This blog article explains how to duplicate approvals across groups:
    https://thwack.solarwinds.com/community/application-and-server_tht/patchzone/blog/2013/08/06/duplicating-approvals-from-a-test-group-to-a-production-group
    MCP/MCSA/MCTS/MCITP

  • Error while running a BIP report in one of our ST environments

    We are getting this exception while running one of our BIP reports (RTF template).
    The same code works in the other ST environments.
    NOTE: we observed some special cases:
    - if the report contains any graph (chart), this error occurs;
    - with a plain report (text only), it works fine.
    Could anyone help with debugging this issue? We found this log in the EM console.
    oracle.xdo.XDOException: java.lang.reflect.InvocationTargetException[[
         at oracle.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:1205)
         at oracle.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:276)
         at oracle.xdo.template.FOProcessor.generate(FOProcessor.java:1118)
         at oracle.xdo.servlet.RTFCoreProcessor.transform(RTFCoreProcessor.java:124)
         at oracle.xdo.servlet.CoreProcessor.process(CoreProcessor.java:434)
         at oracle.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:93)
         at oracle.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:1059)
         at oracle.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:624)
         at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:473)
         at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:445)
         at oracle.xdo.servlet.XDOServlet.doGet(XDOServlet.java:265)
         at oracle.xdo.servlet.XDOServlet.doPost(XDOServlet.java:297)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.security.wls.filter.SSOSessionSynchronizationFilter.doFilter(SSOSessionSynchronizationFilter.java:277)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.xdo.servlet.metadata.track.MostRecentFilter.doFilter(MostRecentFilter.java:65)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:122)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.xdo.servlet.init.InitCheckingFilter.doFilter(InitCheckingFilter.java:64)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
         at weblogic.work.ExecuteThread.run(ExecuteThread.java:178)
    Caused by: java.lang.reflect.InvocationTargetException
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at oracle.xdo.common.xml.XSLT10gR1.invokeProcessXSL(XSLT10gR1.java:917)
         at oracle.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:609)
         at oracle.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:327)
         at oracle.xdo.common.xml.XSLTWrapper.transform(XSLTWrapper.java:187)
         at oracle.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:1181)
         at oracle.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:276)
         at oracle.xdo.template.FOProcessor.createFO(FOProcessor.java:1974)
         at oracle.xdo.template.FOProcessor.generate(FOProcessor.java:1118)
         at oracle.xdo.servlet.RTFCoreProcessor.transform(RTFCoreProcessor.java:124)
         at oracle.xdo.servlet.CoreProcessor.process(CoreProcessor.java:434)
         at oracle.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:93)
         at oracle.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:1059)
         at oracle.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:624)
         at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:473)
         at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:445)
         at oracle.xdo.servlet.XDOServlet.doGet(XDOServlet.java:265)
         at oracle.xdo.servlet.XDOServlet.doPost(XDOServlet.java:296)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.StubSecurityHelper.invokeServlet(StubSecurityHelper.java:125)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:26)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.security.wls.filter.SSOSessionSynchronizationFilter.doFilter(SSOSessionSynchronizationFilter.java:276)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.xdo.servlet.metadata.track.MostRecentFilter.doFilter(MostRecentFilter.java:64)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:122)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.xdo.servlet.init.InitCheckingFilter.doFilter(InitCheckingFilter.java:63)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
         at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
         at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:94)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at oracle.security.jps.util.JpsSubject.doAsPrivileged(JpsSubject.java:313)
         at oracle.security.jps.ee.util.JpsPlatformUtil.runJaasMode(JpsPlatformUtil.java:413)
         at oracle.security.jps.ee.http.JpsAbsFilter.runJaasMode(JpsAbsFilter.java:94)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:56)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.wrapRun(WebAppServletContext.java:3715)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.security.service.SecurityManager.runAs(SecurityManager.java:120)
         at weblogic.servlet.internal.WebAppServletContext.securedExecute(WebAppServletContext.java:2277)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
         at weblogic.work.ExecuteThread.execute(ExecuteThread.java:209)
         ... 1 more
    Caused by: oracle.xdo11g.xpath.XPathException: Extension function error: Error invoking 'chart_svg':'java.lang.NoClassDefFoundError: oracle/dss/graph/TickLabelCallback'
         at oracle.xdo11g.xslt.XSLStylesheet.flushErrors(XSLStylesheet.java:1850)
         at oracle.xdo11g.xslt.XSLStylesheet.execute(XSLStylesheet.java:616)
         at oracle.xdo11g.xslt.XSLStylesheet.execute(XSLStylesheet.java:551)
         at oracle.xdo11g.xslt.XSLProcessor.processXSL(XSLProcessor.java:345)
         at oracle.xdo11g.xslt.XSLProcessor.processXSL(XSLProcessor.java:194)
         at oracle.xdo11g.xslt.XSLProcessor.processXSL(XSLProcessor.java:230)
         at oracle.xdo11g.parser.v2.XSLProcessor.processXSL(XSLProcessor.java:124)
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
         at java.lang.reflect.Method.invoke(Method.java:597)
         at oracle.xdo.common.xml.XSLT10gR1.invokeProcessXSL(XSLT10gR1.java:920)
         at oracle.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:609)
         at oracle.xdo.common.xml.XSLT10gR1.transform(XSLT10gR1.java:328)
         at oracle.xdo.common.xml.XSLTWrapper.transform(XSLTWrapper.java:187)
         at oracle.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:1181)
         at oracle.xdo.template.fo.util.FOUtility.generateFO(FOUtility.java:276)
         at oracle.xdo.template.FOProcessor.generate(FOProcessor.java:1118)
         at oracle.xdo.servlet.RTFCoreProcessor.transform(RTFCoreProcessor.java:124)
         at oracle.xdo.servlet.CoreProcessor.process(CoreProcessor.java:434)
         at oracle.xdo.servlet.CoreProcessor.generateDocument(CoreProcessor.java:93)
         at oracle.xdo.servlet.ReportImpl.renderBodyHTTP(ReportImpl.java:1059)
         at oracle.xdo.servlet.ReportImpl.renderReportBodyHTTP(ReportImpl.java:624)
         at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:473)
         at oracle.xdo.servlet.XDOServlet.writeReport(XDOServlet.java:445)
         at oracle.xdo.servlet.XDOServlet.doGet(XDOServlet.java:265)
         at oracle.xdo.servlet.XDOServlet.doPost(XDOServlet.java:297)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
         at javax.servlet.http.HttpServlet.service(HttpServlet.java:821)
         at weblogic.servlet.internal.StubSecurityHelper$ServletServiceAction.run(StubSecurityHelper.java:227)
         at weblogic.servlet.internal.ServletStubImpl.execute(ServletStubImpl.java:300)
         at weblogic.servlet.internal.TailFilter.doFilter(TailFilter.java:27)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.security.wls.filter.SSOSessionSynchronizationFilter.doFilter(SSOSessionSynchronizationFilter.java:277)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.xdo.servlet.metadata.track.MostRecentFilter.doFilter(MostRecentFilter.java:65)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.xdo.servlet.security.SecurityFilter.doFilter(SecurityFilter.java:122)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.xdo.servlet.init.InitCheckingFilter.doFilter(InitCheckingFilter.java:64)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.dms.servlet.DMSServletFilter.doFilter(DMSServletFilter.java:136)
         at weblogic.servlet.internal.FilterChainImpl.doFilter(FilterChainImpl.java:57)
         at oracle.security.jps.ee.http.JpsAbsFilter$1.run(JpsAbsFilter.java:111)
         at oracle.security.jps.ee.http.JpsAbsFilter.doFilter(JpsAbsFilter.java:161)
         at oracle.security.jps.ee.http.JpsFilter.doFilter(JpsFilter.java:71)
         at weblogic.servlet.internal.WebAppServletContext$ServletInvocationAction.run(WebAppServletContext.java:3681)
         at weblogic.security.acl.internal.AuthenticatedSubject.doAs(AuthenticatedSubject.java:321)
         at weblogic.servlet.internal.WebAppServletContext.execute(WebAppServletContext.java:2183)
         at weblogic.servlet.internal.ServletRequestImpl.run(ServletRequestImpl.java:1454)
         ... 1 more
    I've seen the same issue discussed in a few other forum threads, but none of them helped:
    BI Beans chart - 'chart_svg' error after migration to Solaris 10
    Re: Error while running displaying Charts
    Could not initialize class sun.awt.X11GraphicsEnvironment

    You might want to repost this in the Reports forum here on OTN to get a Reports Guru to help out with this.
    Thanks,
    Rob

  • My iPad 2 can't load new Yahoo emails from my home WiFi (but loads them fine in other WiFi environments and loads Gmail messages fine at home)

    My iPad 2 had no problems loading e-mails until about a month ago. Then it started periodically being unable to load new e-mails from one of my two Yahoo accounts (the primary one), but the secondary Yahoo account worked fine. (It says "Checking for mail..." and never stops checking.) This only happens when I'm at home, using my home WiFi (a slow connection). Now, for the past two weeks, about 95% of the time, neither of my Yahoo accounts works on the iPad, although I have no problem accessing these accounts on my iPod Touches or on my desktop Mac mini. If I take the iPad to my neighbor's house and use her WiFi, I can get the Yahoo mail. (She has cable/internet through Comcast; I have DSL.) I can access the Yahoo mail in public WiFi environments like Starbucks, Barnes & Noble, and the Apple store. I upgraded the operating system to iOS 6.0.1, and the problem didn't go away. I've been to two Apple stores and seen two different "Geniuses" about this. The first one observed that although the iPad could load mail while in the store, it was doing so very, very slowly. She did a complete restore (erasing all of my apps, photos, etc.) but when I got back home, the problem resumed. The second Genius couldn't figure the problem out, either. He suggested I contact Yahoo, which I highly doubt will get me anywhere. I eventually added back all of my old apps, but then deleted them again in the past few days thinking that perhaps a persnickety app was causing a problem. No luck.

    Answering my own question here. It's fixed! Another user recommended I switch to another wireless network then back to mine. And it worked! Mail is sending again with no problem.

  • Office Web Apps for Hosted Environments

    Hi there,
    I have some servers that host two different SharePoint farms, and currently one is connected to an OWA server:
    http://blogs.technet.com/b/justin_gao/archive/2013/06/30/configuring-office-web-apps-server-communication-using-https.aspx
    I want my second farm to use OWA too, and was wondering if it is possible to use the same OWA server for this. So, can I create two Office Web Apps farms on the same server?

    Yes, you can use the same OWA server for multiple SharePoint environments. You can refer to the links below:
    http://social.technet.microsoft.com/Forums/en-US/78f678cb-69fc-48e6-9f4d-6985154f2d0c/office-web-apps-with-multiple-sp-farms?forum=sharepointadmin
    http://blogs.technet.com/b/office_resource_kit/archive/2012/09/11/introducing-office-web-apps-server.aspx

  • Web Dynpro JAVA developer attempting to unravel Environmental Compliance source code seeks guidance from EC NWDS Composition Environment specialist

    It frustrates me to have to ask for detailed instruction on this issue, but normal means of importing these Java-based EC software archives just aren't working in our NWDI system. We never moved our production WDJ applications to CE (like so many others we jumped straight from NW7.01 to NW7.3) but I'm doing my part to come up to speed on composition projects. No matter how we create the NWDI track and import the software archives and dependencies, we can't get to a state where we can successfully import a project (product?) from the Development Infrastructure into the Composite Explorer perspective.
    I need to know specifically how I, an experienced WDJ developer with no exposure to CE, can take the five .SCA files that contain Environmental Compliance 3.0 and get the project into my NWDS 7.31 workspace in a workable state, on my local system, without NWDI. Actual code modifications can wait for the moment; I just need to analyze the classes and EJBs to identify where we might want to make enhancements, and where/how to use the provided Enhancement Spots. I've followed the links, downloaded the PDFs, read the online help, stepped through the tutorials - I've done my due diligence. It's common knowledge that Composition Environment is no longer recommended for new development, and as the EH&S programs are converted to WDA there are even fewer resources to look to.
    These specific types of pleas don't often return results, as I have learned over my years in this community, but nothing ventured, nothing gained, and I am no longer afraid of looking dumb or being scolded. I just need help.
    As always, points will be awarded.

    I agree in part, Tobias.
    The source code is indeed in the SCA, inside the src.zip file within each SDA.
    What the standard SCA lacks is the SOURCEARCHIVES folder. That folder should contain a file (with the extension .dcsa) for each DC (component) in the standard SCA.
    In short, yes, it is possible to perform a reverse engineering of sorts:
    Import the standard SCA into NWDS.
    Note the name and type of all DCs, their dependencies, and their public parts. Yes, it is a lot of work, as there are many interdependent DCs.
    Once that is done, create a new workspace in NWDS and create a new (custom) SCA in the DI; inside it, create all the DCs with the same names, types, and dependencies, identical to the standard SCA.
    Right after that, go into the src folder of each of these projects (DCs) and replace its contents with the contents of the src.zip file contained within the SDA files inside the standard SCA.
    That done, you will have a custom SCA with editable counterparts of all the standard SCA's DCs.
    I agree it is not exactly good practice, nor something SAP would endorse; however, it is indeed possible.
    Regards,
    Angelo

  • Limitations of read-only JE Environments

    Hello all,
    I realized recently that our documentation on the limitations of read-only Environments is lacking. So for the next JE release I updated the javadoc for EnvironmentConfig.setReadOnly. The new text is pasted below. If you're using, or thinking of using, a read-only environment, be sure to take a look. I apologize for the lack of documentation in the current release, especially if you've used a read-only environment without understanding these limitations and this has caused problems for you.
    --mark
    setReadOnly
    public EnvironmentConfig setReadOnly(boolean readOnly)
    Configures the database environment to be read-only, and any attempt to modify a database will fail.
    A read-only environment has several limitations and is recommended only in special circumstances. Note that there is no performance advantage to opening an environment read-only.
    The primary reason for opening an environment read-only is to open a single environment in multiple JVM processes. Only one JVM process at a time may open the environment read-write. See EnvironmentLockedException.
    When the environment is open read-only, the following limitations apply.
    - In the read-only environment no writes may be performed, as expected, and databases must be opened read-only using DatabaseConfig.setReadOnly(boolean).
    - The read-only environment receives a snapshot of the data that is effectively frozen at the time the environment is opened. If the application has the environment open read-write in another JVM process and modifies the environment's databases in any way, the read-only version of the data will not be updated until the read-only JVM process closes and reopens the environment (and by extension all databases in that environment).
    - If the read-only environment is opened while the environment is in use by another JVM process in read-write mode, opening the environment read-only (recovery) is likely to take longer than it does after a clean shutdown. This is due to the fact that the read-write JVM process is writing and checkpoints are occurring that are not coordinated with the read-only JVM process. The effect is similar to opening an environment after a crash.
    - In a read-only environment, the JE cache will contain information that cannot be evicted because it was reconstructed by recovery and cannot be flushed to disk. This means that the read-only environment may not be suitable for operations that use large amounts of memory, and poor performance may result if this is attempted.
    - In a read-write environment, the log cleaner will be prohibited from deleting log files for as long as the environment is open read-only in another JVM process. This may cause disk usage to rise, and for this reason it is not recommended that an environment is kept open read-only in this manner for long periods.
    For these reasons, it is recommended that a read-only environment be used only for short periods and for operations that are not performance critical or memory intensive. With few exceptions, all application functions that require access to a JE environment should be built into a single application so that they can be performed in the JVM process where the environment is open read-write.
    In most applications, opening an environment read-only can and should be avoided.
    -----
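    To make the usage concrete, here is a minimal sketch of how a second JVM process might open an existing environment read-only; the environment home path and database name are illustrative assumptions, not part of the API:

    import java.io.File;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class ReadOnlyViewer {
        public static void main(String[] args) throws Exception {
            File envHome = new File("/data/je-env"); // illustrative path

            // Open the environment read-only; it must already exist.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setReadOnly(true);
            envConfig.setAllowCreate(false);
            Environment env = new Environment(envHome, envConfig);

            // Databases must also be opened read-only.
            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setReadOnly(true);
            Database db = env.openDatabase(null, "myDatabase", dbConfig);

            // ... reads here see a snapshot frozen at open time ...

            db.close();
            env.close();
        }
    }

    Remember that if the read-write process modifies the databases afterwards, this read-only process will not see those changes until it closes and reopens the environment.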

    You've asked an interesting question! I'm afraid that the answer is that it's not possible in BDB JE today.
    What you'd really like for your second case is part of the type of functionality provided by MVCC (Multi-Version Concurrency Control). In case you're not familiar with that term, MVCC is the database feature which guarantees that all reads made in a transaction will see a consistent snapshot of the database and that the transaction will successfully commit only if no updates it has made conflict with any concurrent updates made since that snapshot. Unfortunately, while our sister product, BDB (C) has that feature, we do not yet support it.
    I think your alternatives are to achieve your point-in-time snapshot from the file-copy-based backup, from the second process, or from an application-level mechanism that locks out other updates.
    Regards,
    Linda

  • Query performance in two environments

    Hi all,
    I have developed simple select queries on a MultiProvider and I am facing issues with query performance in the Quality box. A query runs pretty fast in Dev and returns results, while the same one dumps in the Quality environment with a time-out error. This is all the more strange because our Dev box currently has comparatively more records than the Quality environment.
    On analyzing the query path in both environments, we noticed that the query does an index scan in Dev but not in Quality, especially when the selection is such that the query is supposed to return a lot of records. Since the query does a sequential scan in Quality, it dumps. Is there any setting that I need to make separately in the Quality environment?
    Any tips on query optimization would be great help. Thanks
    Regards
    Niranjana

    Execute some of the RSRT tests for the query in QA using the "Execute + Debug" option, including the tests for the MultiProvider and database checks, and try to compare the results with Dev as well.
    Hope it Helps
    Chetan
    @CP..

  • Deploying iTunes in lab environments campuswide

    We are creating an iTunes U page for our institution (small college), but my question involves the deployment of iTunes across our campus. What is the best, most streamlined method for deploying and using iTunes over a campus network that has migrating user profiles?
    We have several lab environments in which students would access iTunes, and we want to minimize the issues involving storage limits with migrating profiles, when any particular user will login to access iTunes from any particular workstation. Specifically, what would be the profile ramifications from accessing the user agreement to syncing their devices? Furthermore, we are mostly a Windows-based campus, so we're looking for any additional caveats that might occur.
    Any thoughts, suggestions or references would be greatly appreciated.

  • Concept of Dev, Test, Prodn Environments

    Hi All,
    I am new to Oracle and my question may sound silly, so please bear with me; I have never worked at a client site before.
    I am trying to understand the concept of environments like dev, prodn, etc.
    Oracle 10g is installed on a server machine running Linux. The DBA has created five permanent tablespaces and two temporary tablespaces.
    He has also created 15 users who access the database with their own schema space and privileges. Each user creates database objects that belong to their own schema but can reside in any tablespace available to that user.
    Say, for example, a user/schema is used by five people working on an application. They create tables and other database objects, which can be accessed by anyone logging in through that specific user/schema. They have built an application and now want to test it.
    Technically they say "push it to the test environment", or even the production environment. I don't understand how the concept of different environments fits into this picture, and what does "pushing" mean?
    Is the test environment another server running an Oracle database, or is it something else, or a combination of things?
    Your help will be truly appreciated. Thank you

    Your question is not really an Oracle question. If there is a "concepts" guide to follow, search for the software development life cycle (for absolute starters). Different sites have different environments, for example: sandpit, development, testing, acceptance testing, training, pre-production, production, and so on. Some sites run all environments on one server (not the best way), others run each on a separate server. Some mix development, testing, etc. in the same database instance and keep production and training on a production server. They all loosely follow something like development -> testing -> production, with some of the other environments in between. Best advice for you right now: don't screw up in production, followed by don't screw up anywhere else; otherwise you might have some angry developers and managers, and your organisation paying them money to twiddle their thumbs while the project slips.

  • Coldfusion 11 SSL Certs applied - The APR based Apache Tomcat library which allows optimal performance in production environments,

    Coldfusion 11
    Windows Server 2012 R2
    Both the ColdFusion admin and the additional site work fine over HTTP.
    As soon as I attempt to enable SSL WebSockets and install the SSL certs, the ColdFusion 11 Application service will not start. I followed the steps below:
    Coldfusion 11 - Web Sockets via SSL
    The Coldfusion-error.log shows
    Jan 26, 2015 3:21:23 PM org.apache.catalina.core.AprLifecycleListener init
    INFO: The APR based Apache Tomcat Native library which allows optimal performance in production environments was not found on the java.library.path
    The server was a cloned VM of the test server with a developer copy of CF11, but a license has been purchased and applied. The SSL certs have been imported successfully, and the paths to the cert file etc. are correct in CF Admin.
    Do I need to install another version of Coldfusion to get around this issue or is there a download update I need to apply?
    If I reconfigure \cfusion\runtime\conf\server.xml to comment out the SSL sections, it works fine.
    Any assistance welcome. I can't make this site publicly available without SSL working.
    SM

    @Scott, first, are you running update 3? If so, let's clarify at the outset that, as the bug report you point to does indicate in the notes below it, there is a fix for a problem where this feature broke in that release. And as it notes, you can email [email protected] to request the fix (referring to that bug), or you can wait for it to be released publicly as part of a larger set of fixes.
    If you are NOT on update 3, or if you apply the fix and find things still don't work, I would wonder about a few things, from what you've described.
    First, you say that the CF service won’t start, and you offer some lines from the ColdFusion-error log. Just to be clear, those particular error messages are common and nothing to worry about. They definitely do NOT reflect any reason CF doesn’t start. But are you confirming that that time (in the log lines) is in fact the time that you had started CF, when it would not start? I’d suspect not.
    Look instead in the coldfusion-out.log. What does THAT log show at the time you try to start CF and it won't start? You may find something else there. (And since you refer to editing the server.xml file, you may find the log complains that, because of an error in the XML, it can't "parse" the file. It's worth checking.)
    You say also that you have confirmed that “paths are correct in CF Admin to the cert file”. What path are you referring to? There’s no page in the CF admin that points to the CACERTS file in which the certs are stored. Do you perhaps mean on the “system info” or “settings summary” page? Even so there’s still no line in there which refers to the “cert file”.
    Instead (and this could be a part of your problem), the cert file is simply found WITHIN the directory where CF is pointed to find its JVM. Wherever THAT is, is where you need to put any certificates. So take a look at the CF Admin, either in the "java and jvm" page (and the value of its "Java Virtual Machine Path"), or in the "settings summary" or "system information" pages and their value for "Java Home". Is that something like \coldfusion11\jre? Or something like \Java\jdk1.7.0_71\jre? Whichever it is, THAT's where you need to put the certs, within there (in its \lib\security folder).
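    If it helps, one quick way to sanity-check that step is a minimal, throwaway Java sketch (the path below is an assumption; substitute whatever your "Java Home" value shows) that opens that JVM's cacerts file and lists its aliases, so you can confirm your imported cert actually landed in the keystore CF is using:

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import java.util.Collections;

    public class CacertsCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical path -- substitute the JRE shown as "Java Home" in the CF Admin.
            String cacerts = "C:\\ColdFusion11\\jre\\lib\\security\\cacerts";
            KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
            try (FileInputStream in = new FileInputStream(cacerts)) {
                // "changeit" is the JDK's default cacerts password
                ks.load(in, "changeit".toCharArray());
            }
            // List every alias so you can check whether your cert is present
            for (String alias : Collections.list(ks.aliases())) {
                System.out.println(alias);
            }
        }
    }

    If your cert's alias doesn't show up in the listing, it was imported into a different JVM's keystore than the one CF is actually running on.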
    Finally, when you say that if you “comment out the SSL sections  it works fine”, do you mean that a) CF comes up and b) some example code calling your socket works, as long as you don’t use SSL?
    To be clear, no, you don’t need any other version of CF11 to get websockets to work. But if you are on update 3, that may be the simple problem. Let us know how it goes for you with this info.
    /charlie

  • Sender SOAP Adapter - how to avoid changes of URL for different environments

    Dear experts,
    we have a concern with transports of PI objects in our environment.
    Situation:
    When we transport SOAP sender objects in PI from Dev to Test, the hostname and the business system (target system in SLD) in the URL (endpoint) change automatically. For example:
    In Development, to call the Web Service (Sender SOAP Adapter):
    http://sappid1.evonceib.local:50100/XISOAPAdapter/MessageServlet?senderParty=&senderService=EvonceOSB_PEMP_D&receiverParty=&receiverService=&interface=PEMP_PagadoresMaintain_Async_Out_WS&interfaceNamespace=urn%3Aonce%3Asppemp%3Amanagedatosdepagadorpa_in_pemp
    In Test, to call the Web Service (Sender SOAP Adapter):
    http://sappit1:50100/XISOAPAdapter/MessageServlet?senderParty=&senderService=EvonceOSB_PEMP_T&receiverParty=&receiverService=&interface=PEMP_PagadoresMaintain_Async_Out_WS&interfaceNamespace=urn%3Aonce%3Asppemp%3Amanagedatosdepagadorpa_in_pemp
    Problem:
    The consuming side wants to have only one endpoint and doesn't want to touch their development when the consumer is transported. A workaround via DNS aliases should solve the hostname prefix of the URL, but how do we handle the difference in business systems (senderService=EvonceOSB_PEMP_D vs. senderService=EvonceOSB_PEMP_T) in the URL?
    In the SLD I cannot maintain two business systems with the same name, and when I do not define a transport target, the import fails.
    So does anybody have an idea how to get exactly the same URL for different environments?
    Thank you in advance,
    Best Regards,
    Karsten Blankenstein

    Hi,
    It is recommended to use different environments for Dev, Test, and Production, so your PI server hostname and port will be different for every environment, and therefore the URL will differ.
    However, if you don't want the business system name to change, use a BUSINESS SERVICE/COMPONENT instead of a BUSINESS SYSTEM. It will be the same in every environment, and you also don't need to maintain anything in the SLD for it.
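    If the consumer side can be adjusted at all, one consumer-side mitigation (a sketch under assumptions; the property names pi.host and pi.senderService are invented for illustration) is to externalize the environment-specific parts of the URL into a per-environment properties file, so the consumer code itself never changes between environments:

    import java.io.FileInputStream;
    import java.util.Properties;

    public class EndpointResolver {
        public static void main(String[] args) throws Exception {
            // args[0] names a per-environment file, e.g. endpoint-dev.properties
            Properties p = new Properties();
            try (FileInputStream in = new FileInputStream(args[0])) {
                p.load(in);
            }
            // Only the host and senderService vary between Dev and Test;
            // the rest of the URL is taken from the thread above.
            String url = "http://" + p.getProperty("pi.host")
                    + ":50100/XISOAPAdapter/MessageServlet?senderParty=&senderService="
                    + p.getProperty("pi.senderService")
                    + "&receiverParty=&receiverService="
                    + "&interface=PEMP_PagadoresMaintain_Async_Out_WS"
                    + "&interfaceNamespace=urn%3Aonce%3Asppemp%3Amanagedatosdepagadorpa_in_pemp";
            System.out.println(url);
        }
    }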

  • Use of Flashback Database in Data guard environments

    11.2.0.3/RHEL 5.8
    I've come across several docs that talk about configuring FLASHBACK DATABASE in Data Guard environments. We have several
    physical standby DBs (single instance & RAC) running in our shop. I would like to know two or three major (common) uses of FLASHBACK DATABASE in Data Guard environments.
    I understood one use mentioned in the URL below, i.e. recovering from a logical mistake:
    http://uhesse.com/2010/08/06/using-flashback-in-a-data-guard-environment/
    I would like to know the other major/common uses of the Flashback Database feature in a Data Guard environment.

    A couple of other uses:
    1) You can use Flashback to test your DR: activate your standby, test application/network connectivity and functionality on your DR site, and when done, revert the database back to a physical standby. You do, however, have to ensure that this is allowed in your environment. In some places I have worked, this would be a big no-no, as there were zero-data-loss requirements, but some companies will allow it as long as the standby is back in place within a certain time period.
    2) In the case where you have to do a failover for whatever reason, but the former primary site later becomes available again, you can flash back the former primary to make it the standby rather than re-instantiating the database from scratch.
    E.g. you have a power outage at your primary site, so you perform a failover and your standby becomes the primary. Once the original primary site is back online, you can convert the previous primary into a standby by doing a full backup/restore (or whatever method you choose) to recreate the standby. However, you also have the option of using Flashback on this database and then converting it into a standby, as this would potentially be quicker than re-instantiating the standby.
