Data warehouse response time

Hi all,
I have a warehouse about 1/2 TB in size, and its response time is very slow when reports are run against it.
The shared pool stats are below:
SQL Area get ratio = 95.2%
pin ratio = 99.8%
reloads/pins = .0016%
What should I be looking at?

Are you certain that no query plans have changed?
I am not 100% sure
What version of Oracle?
9.2.0.4.0
Which optimizer are you using?
none
Initialization parameters:
optimizer_index_caching=0
optimizer_index_cost_adj=100
optimizer_mode=choose
When did you last gather statistics?
today
Thanx
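
For reference, the shared pool ratios quoted above come from V$LIBRARYCACHE, and on 9.2 statistics are normally gathered with DBMS_STATS. A minimal sketch (the schema name is a placeholder, not from the post):

-- Library cache (shared pool) ratios for the SQL AREA namespace:
SELECT ROUND(gethits / gets * 100, 1) AS get_ratio_pct,
       ROUND(pinhits / pins * 100, 1) AS pin_ratio_pct,
       ROUND(reloads / pins * 100, 4) AS reloads_per_pin_pct
  FROM v$librarycache
 WHERE namespace = 'SQL AREA';

-- Gathering schema statistics on 9.2 (DWH_OWNER is a hypothetical schema name):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'DWH_OWNER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    cascade          => TRUE);
END;
/

Ratios this healthy suggest the shared pool is not the bottleneck; comparing the execution plans of the slow reports before and after the statistics refresh is usually more productive.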

Similar Messages

  • How long is the "OS X Mountain Lion Up-To-Date Program" response time?

    I submitted the information about my recently purchased laptop to the up-to-date page http://www.apple.com/ca/osx/uptodate/ earlier in the day but haven't received any notifications (not even an email saying that the information has been received). I've heard that other people have gotten codes, so I assume it shouldn't take this long. Should I just resubmit?

    danpuzz wrote:
    Thanks. Is it odd that I haven't received any sort of confirmation email to indicate that the request is pending, or is one not sent out?
    Servers are slammed.  Your email is the least of the problems.
    http://www.macrumors.com/2012/07/25/apples-os-x-mountain-lion-up-to-date-program-experiencing-early-hiccups/

  • Is OBIEE used to create data warehouses dynamically?

    Management where I work wants to use the OBIEE Administrator to source a 3NF normalized database and create a "virtual data warehouse" in the Business Model and Mapping layer of OBI Administrator, since a star schema model is required by the OBI Business Model layer. They claim an Oracle sales rep told them the Administrator tool could do this.
    Is this possible? As OBI issues only SQL and not PL/SQL, how can one "create" dimensions, lookup tables and fact tables dynamically? And even if it could, the performance hit of recreating the virtual data warehouse each time a query is issued would be huge.
    Having used Prism Warehouse Builder and DataStage in the past to create data warehouses, I am aware that one needs a procedural programming language to create and maintain the star schema tables (surrogate key maintenance, controlling workflows, maintaining slowly changing dimensions, intermediate lookup tables, etc.). SQL was not meant to do this heavy-lifting programming. After all, isn't this why Oracle Warehouse Builder (and previously Informatica) is shipped with the OBIEE suite, namely that OBI is not an ETL tool for creating dimensional models? One uses an ETL tool to create the dimensional data model for OBI to access, and passes the metadata along to OBI Answers.
    So is it normal practice to use the Administrator's Business Model and Mapping layer to create virtual star schema logical tables from physical tables that are in 3NF? Or is the tool used to access already-denormalized tables in the physical layer that were created using Informatica, OWB or another ETL tool?

    I asked an "Expert" in OBIEE. Here are snippets of his response:
    "Be aware though that the transformation ability is fairly limited, and
    will only really work with data that is very close to a star schema, i.e.
    the data can be easily transformed through a couple of denormalizations and
    table joins. If your source data is very normalized and cannot easily be
    transformed into a star schema, you would need to use a tools such as
    Informatica, OWB or similar to extract data from your source systems, load
    and then transform it into a data warehouse or data mart and report of of
    that. The more that your data needs to be transformed (i.e. the closer it
    is to a 3NF model) the more likely it is that you'll need to use an ETL
    tool, and a data warehouse or data mart, to host your data."
    And in response to my noting the lack of documentation on how to model a 3NF to Star Schema his response was:
    "No, you're right, the documentation doesn't really go into "how to" turn a 3NF model into a dimensional model. If you look back to when OBIEE was a Siebel product, the documentation was really aimed at either Siebel consultants or customers who had been on the training, they didn't want customers "off the street" to try and implement OBIEE as it would hit their services revenue. That's where the blog posts we do, things like the Oracle-by-example training courses on OTN and so on come in, otherwise as you say there's little out there on the best way to transform your model - it's mostly passed on "word of mouth" or is built up from experience on working on projects."

  • Coherence and EclipseLink - JTA Transaction Manager - slow response times

    A colleague and I are updating a transactional web service to use Coherence as an underlying L2 cache. The application has the following characteristics:
    Java 1.7
    Using Spring Framework 4.0.5
    EclipseLink 12.1.2
    TopLink grid 12.1.2
    Coherence 12.1.2
    javax.persistence 12.1.2
    The application is split, with a GAR in a WebLogic environment and the actual web service application deployed into IBM WebSphere 8.5.
    When we execute a GET from the server for a decently sized piece of data, the response time is roughly 20-25 seconds. From looking into DynaTrace, it appears that we're hitting a brick wall at the "calculateChanges" method within EclipseLink. Looking further, we appear to be having issues with the transaction manager, but we're not sure what the problem is. If we use a local resource transaction manager, the response time is roughly 500 milliseconds for the exact same request. When the JTA transaction manager is involved, it's 20-25 seconds.
    Is there a recommendation on how to configure the transaction manager when incorporating Coherence into a web service application of this type?

    Hi Volker/Markus,
    Thanks a lot for the response.
    Yeah Volker, you are absolutely right. The 10-12 seconds happen when we have not used the transaction for several minutes... It looks like the transactions are moved out of the SAP buffer, or something similar, in a very short time.
    And yes, the ABAP WPs are running in pool 2 (*BASE), and the JAVA server I have set up in another memory pool of 7 GB.
    I would say the performance of the JAVA part is much better than the ABAP part.
    Should I just remove the ABAP part of the SOLMAN from memory pool 2 and assign the JAVA/ABAP a separate huge memory pool of, say, 12-13 GB?
    Is that likely to improve my performance?
    No, I have not changed RSDB_TDB in TCOLL from daily twice to weekly once on all systems on this box. It is running twice daily right now.
    Should I change it to weekly once on all the systems on this box? How is that going to help me? The only thing I can think of is that it will save me some CPU utilization, as considerable CPU resources are needed for this program to run.
    But my CPU utilization is anyway only about 30% on average. It's i570 hardware, currently running 5 CPUs.
    So you still think I should change this job from daily twice to weekly once on all systems on this box?
    Markus, did you open any messages with SAP on this issue?
    I remember working on change management in Solution Manager 3.2, and the response times were much better than in 4.0.
    Let me know, guys, and once again, thanks a lot for your help and valuable input.
    Abhi

  • Data warehouse, time dimension

    Dear folks,
    My boss asked me an interesting question, but I couldn't answer, because I'm not an Oracle expert; I'd rather describe myself as a Cognos developer.
    We have a ROLAP data warehouse, and there is a time dimension table whose primary key column has type NUMBER. He asked me why it is worth having a number primary key rather than having the date itself as the primary key, since Oracle converts dates to numbers too... So what do you think, what's the difference, and which is the better option?
    Thank you in advance, and sorry for my bad English.
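
    A common compromise, sketched below with hypothetical object names, is a numeric YYYYMMDD surrogate key: it joins as cheaply as any number, sorts chronologically, and stays human-readable in the fact table:

    -- Sketch only; all names are hypothetical.
    CREATE TABLE dim_time (
      time_key    NUMBER(8) PRIMARY KEY,   -- e.g. 20240131
      calendar_dt DATE      NOT NULL UNIQUE,
      year_no     NUMBER(4) NOT NULL,
      month_no    NUMBER(2) NOT NULL,
      day_no      NUMBER(2) NOT NULL
    );

    -- A fact row then carries the numeric key rather than the DATE itself:
    --   TO_NUMBER(TO_CHAR(sale_dt, 'YYYYMMDD')) AS time_key

    The usual argument for a surrogate key over a raw DATE is independence from the source data type plus room for special members (e.g. an "unknown date" row), rather than storage.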

    Hi,
    I can do it, but this would be useful only to create the "time dimension" table. The "sales" fact table also needs to be altered (so the "time" column will not contain the value of the time, but the ID of the corresponding row in the "time dimension" table).
    I know that in a DW this procedure is done automatically by the ETL process.
    My question is: does the repository have any tools similar to this?
    Thank you.

  • Service Manager console can't connect to Service Manager data warehouse SQL Reporting Services

    When I start Service manager console, it gives this kind of error:
    The Service Manager data warehouse SQL Reporting Services server is currently unavailable. You will be unable to execute reports until this server is available. Please contact your system administrator. After the server becomes available, please close your console and re-open it to view reports.
    Also, the EventViewer says:
    Cannot connect to SQL Reporting Services Server. Message= An unexpected error occurred while connecting to the SQL Reporting Services server: System.Net.WebException: The request failed with HTTP status 401: Unauthorized.
    at System.Web.Services.Protocols.SoapHttpClientProtocol.ReadResponse(SoapClientMessage message, WebResponse response, Stream responseStream, Boolean asyncCall)
    at System.Web.Services.Protocols.SoapHttpClientProtocol.Invoke(String methodName, Object[] parameters)
    at Microsoft.EnterpriseManagement.Reporting.ReportingService.ReportingService2005.FindItems(String Folder, BooleanOperatorEnum BooleanOperator, SearchCondition[] Conditions)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String searchPath, IList`1 criteria, Boolean And)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItems(String itemPath)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.FindItem(String itemPath, ItemTypeEnum[] desiredTypes)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReporting.GetFolder(String path)
    at Microsoft.EnterpriseManagement.Reporting.EnterpriseReportingGroup.Initialize()
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath, NetworkCredential credentials)
    at Microsoft.EnterpriseManagement.Reporting.ServiceManagerReportingGroup..ctor(DataWarehouseManagementGroup managementGroup, String reportingServerURL, String reportsFolderPath)
    at Microsoft.EnterpriseManagement.UI.SdkDataAccess.ManagementGroupServerSession.TryConnectToReportingManagementGroup() Remediation = Please contact your Administrator.
    We have a four-server setup where SCSM, SCDW, and the SQL Servers for both are on different servers. Also, I have read that this could be an SPN problem, but this worked last week without the SPNs.

    On the computer where you get the "SQL Reporting Services server is currently unavailable" message, please open Internet Explorer and try to connect to the URL http://<NameOfReportingServer>/reports
    This should open the reporting website in IE. If this isn't working you should check the proxy settings in IE. If the URL doesn't work in IE it won't work in the SCSM console as well (and vice versa).
    Andreas Baumgarten | H&D International Group
    Actually, I can't access the reporting website. It asks me for credentials three times and then returns a blank page. An error message also appears in the EventViewer System log, with ID 4 and source Security-Kerberos.
    The Kerberos client received a KRB_AP_ERR_MODIFIED error from the server "accountname".
    The target name used was HTTP/"reporting services fqn". This indicates that the target server failed to decrypt the ticket provided by the client.
    This can occur when the target server principal name (SPN) is registered on an account other than the account the target service is using.
    Ensure that the target SPN is only registered on the account used by the server.
    This error can also happen if the target service account password is different than what is configured on the Kerberos Key Distribution Center for that target service.
    Ensure that the service on the server and the KDC are both configured to use the same password.
    If the server name is not fully qualified, and the target domain (domain.com) is different from the client domain (domain.com), check if there are identically named server accounts in these two domains,
    or use the fully-qualified name to identify the server.
    I can access the website directly from the server which hosts Reporting Services.
    Also, I ran "setspn -Q HTTP/"reporting services fqn", with the result NO SUCH SPN FOUND.

  • Data Warehouse - Capturing a Period

    I've been a bit stuck; maybe I am overthinking this scenario. In creating a data warehouse fact table I am used to the idea of definitive dates (example: I sold x of product h today); in this case you would have a definitive date for each occurrence and could relate it to a time dimension. My current scenario:
    Customers have services; they may have multiple features on a service and change features on the service. In the case of a change, the old feature is ended and the new one started. So I guess I can key on this change. My only issue, which I keep circling back to, is the case of inactive services and their historical values, and how this would all come together. I just want to make sure I store it properly and the foundation is expandable (see the sketch below). Any feedback would be greatly appreciated.
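
    A minimal sketch of the Type 2 ("effective-dated") dimension approach that fits the feature-change scenario above; all object names are hypothetical:

    -- Each feature change closes the old row and inserts a new one.
    CREATE TABLE service_feature_dim (
      feature_sk   NUMBER       PRIMARY KEY,             -- surrogate key
      service_id   NUMBER       NOT NULL,                -- natural key
      feature_code VARCHAR2(30) NOT NULL,
      valid_from   DATE         NOT NULL,
      valid_to     DATE         DEFAULT DATE '9999-12-31' NOT NULL,
      current_flag CHAR(1)      DEFAULT 'Y' NOT NULL     -- 'N' once superseded
    );

    Inactive services simply stop receiving new rows, so their history remains queryable by date range (valid_from/valid_to) without any special handling.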

    No response; found resolution

  • Use Data Warehouse for EBS 12.1.1 Data

    Hi,
    I want to create a data warehouse for our EBS 12.1.1 system.
    I read some articles about it, but I don't understand where I should start.
    Please tell me what I should do.
    I installed Oracle Warehouse Builder, but I couldn't use it.
    Best regards

    Thanks Eric for the response.
    Our requirement is, in a production instance, to not allow a DB user (APPS/SYSTEM) to see confidential data. For example, a user who has the APPS schema password should not be able to see the salary details, while the application (forms/reports/JSP) should still present the actual data and not the masked data.
    I understand from your reply that the data masking feature CANNOT be used for this purpose, as it rebuilds the tables with the masked data. However, we understand that the same feature can be used in a TEST/DEV environment, so that we can restrict a developer who is working on the TEST/DEV instance from accessing the confidential data.
    Any pointers to achieve our requirement would be highly helpful.
    Thanks for your time.
    Ramana.

  • Sample of cost/benefit analysis of data warehouse

    Greetings!
    I am currently doing research on data warehouses; kindly send me a sample cost/benefit analysis.
    Looking forward to your immediate response.
    e-mail me at "[email protected]"
    Thank you and God Bless!
    Very truly yours,
    Benedict

    Good day Colin.
    Your question is like trying to guess the next day's stock quotes...
    There are many parameters combined together, and not all of them are constants; in fact, most of them are variables.
    The issues vary, let's say:
    1. COBOL programmer payments needed to support and maintain your current AS400 custom-written applications and interfacing...
    2. The number of systems you want to integrate into your organization's business scenarios...
    3. The monitoring capabilities currently available in your organization's spaghetti architecture.
    4. Predicting the growth of your company and adding business partners' systems...
    5. Trying to estimate the reuse of already-written XI interfaces (not possible...)
    6. The duplicated data and data inconsistency in your IT environment before XI and after XI...
    So my friend, I believe this imaginary generic TCO document can't be created...
    Good luck with your project
    Nimrod
    If it helps you: I recently asked an ABAP programmer how long it would take him to develop a simple File-to-IDoc scenario... He said that, including design documents and testing, approx. 8 days. I believe I can do the first one in 5 days and the second in 50% of that time...

  • What is the difference between OLAP and a data warehouse?

    Hi All,
    Is there any difference between OLAP and a data warehouse? Please share your knowledge of these. Thank you.

    A data warehouse is a database containing data that usually represents the business history of an organization. This historical data is used for analysis. Data in a data warehouse is organized to support analysis rather than to process real-time transactions as in online transaction processing (OLTP) systems.
    OLAP technology enables data warehouses to be used effectively for online analysis, providing rapid responses to iterative complex analytical queries. OLAP's multidimensional data model and data aggregation techniques organize and summarize large amounts of data so it can be evaluated quickly using online analysis and graphical tools.
    Reference:
    http://technet.microsoft.com/en-us/library/aa197903(v=sql.80).aspx
    http://stackoverflow.com/questions/18916682/data-warehouse-vs-olap-cube

  • SAP GoLive : File System Response Times and Online Redologs design

    Hello,
    An SAP GoingLive verification session has just been performed on our SAP production environment.
    SAP ECC6
    Oracle 10.2.0.2
    Solaris 10
    As usual, we received database configuration instructions, but I'm a little bit skeptical about two of them:
    1/
    We have been told that our file system read response times "do not meet the standard requirements".
    The following datafile has been flagged as having too high an average read time per block:
    File name                                     Blocks read   Avg. read time (ms)   Total read time (ms)
    /oracle/PMA/sapdata5/sr3700_10/sr3700.data10  67534         23                    1553282
    I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    2/
    We have been asked to increase the size of the online redo logs, which are already quite large (54 MB).
    Actually, we have BW loading that generates "Checkpoint not complete" messages every night.
    I've read in SAP note 79341 that:
    "The disadvantage of big redo log files is the lower checkpoint frequency and the longer time Oracle needs for an instance recovery."
    Frankly, I have problems understanding this sentence.
    Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    But how is it that frequent checkpoints should decrease the time necessary for recovery?
    Thank you.
    Any useful help would be appreciated.

    Hello
    >> I'm surprised that an average read time of 23 ms is considered a high value. What exactly are those "standard requirements"?
    The recommended ("standard") values are published at the end of SAP note #322896.
    23 ms seems a little bit high to me; for example, we have roughly 4 to 6 ms on our productive system (with SAN storage).
    >> Frequent checkpoints mean more redo log file switches, which means more archived redo log files generated, right?
    Correct.
    >> But how is it that frequent checkpoints should decrease the time necessary for recovery?
    A checkpoint occurs on every log switch (of the online redo log files). On a checkpoint event, the following three things happen in an Oracle database:
    Every dirty block in the buffer cache is written down to the datafiles
    The latest SCN is written (updated) into the datafile header
    The latest SCN is also written to the controlfiles
    If your redo log files are larger, checkpoints happen less often, and in that case the dirty buffers are not written down to the datafiles (unless free space in the buffer cache is needed). So if your instance crashes, you need to apply more redo to the datafiles to reach a consistent state (roll forward). If you have smaller redo log files, more log switches occur, so the SCNs in the datafile headers (and the corresponding data) are closer to the newest SCN; ergo, the recovery is faster.
    But this concept does not fully match reality, because Oracle implements algorithms to reduce the workload for the DBWR in the case of a checkpoint.
    There are also several parameters (depending on the Oracle version) which ensure that a required recovery time is met, for example FAST_START_MTTR_TARGET; see the sketch after this reply.
    Regards
    Stefan
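
    On the 10g system discussed here, the trade-off can be inspected directly. A sketch using standard Oracle views and the parameter Stefan mentions (the 300-second target is an arbitrary example):

    -- Current estimated vs. target instance recovery time (seconds):
    SELECT target_mttr, estimated_mttr, recovery_estimated_ios
      FROM v$instance_recovery;

    -- Cap crash-recovery time; Oracle then paces incremental checkpoints itself:
    ALTER SYSTEM SET fast_start_mttr_target = 300 SCOPE = BOTH;

    -- Log switches per hour over the last day; very frequent switches correlate
    -- with the "Checkpoint not complete" messages described above:
    SELECT TRUNC(first_time, 'HH24') AS hour, COUNT(*) AS switches
      FROM v$log_history
     WHERE first_time > SYSDATE - 1
     GROUP BY TRUNC(first_time, 'HH24')
     ORDER BY 1;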

  • After iOS 8 update, the screen is jerky and the response time is really, really slow... Using a PC to submit this!

    Is there a solution to a jerky screen and a slow response time after updating to iOS 8?

    Try some basic troubleshooting:
    (A) Try reset iPad
    Hold down the Sleep/Wake button and the Home button at the same time for at least ten seconds, until the Apple logo appears
    (B) Try reset all settings
    Settings>General>Reset>Reset All Settings
    (C) Setup as new (data will be lost)
    Settings>General>Reset>Erase all content and settings

  • SQL tune (High response time)

    Hi,
    I am running the following SQL, which is causing high response time. Can you please help? The DBMS_SQLTUNE report is below.
    GENERAL INFORMATION SECTION
    Tuning Task Name : BFG_TUNING1
    Tuning Task Owner : ARADMIN
    Scope : COMPREHENSIVE
    Time Limit(seconds) : 60
    Completion Status : COMPLETED
    Started at : 01/28/2013 15:48:39
    Completed at : 01/28/2013 15:49:43
    Number of SQL Restructure Findings: 7
    Number of Errors : 1
    Schema Name: ARADMIN
    SQL ID : 2d61kbs9vpvp6
    SQL Text : SELECT /*+no_merge(chg)*/ chg.CHANGE_REFERENCE,
    chg.Customer_Name, chg.Customer_ID, chg.Contract_ID,
    chg.Change_Title, chg.Change_Type, chg.Change_Description,
    chg.Risk, chg.Impact, chg.Urgency, chg.Scheduled_Start_Date,
    chg.Scheduled_End_Date, chg.Scheduled_Start_Date_Int,
    chg.Scheduled_End_Date_Int, chg.Outage_Required,
    chg.Change_Status, chg.Change_Status_IM, chg.Reason_for_change,
    chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (SELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_con_id IS NULL AND a.bfg_cus_id = chg.customer_id AND
    NOT EXISTS (SELECT a.bfg_con_id FROM exp_cm_cusid1 a WHERE
    a.bfg_con_id IS NOT NULL AND a.bfg_cus_id = chg.customer_id
    AND a.bfg_con_id = chg.contract_id ) UNION SELECT
    /*+no_merge(chg)*/ chg.CHANGE_REFERENCE, chg.Customer_Name,
    chg.Customer_ID, chg.Contract_ID, chg.Change_Title,
    chg.Change_Type, chg.Change_Description, chg.Risk, chg.Impact,
    chg.Urgency, chg.Scheduled_Start_Date, chg.Scheduled_End_Date,
    chg.Scheduled_Start_Date_Int, chg.Scheduled_End_Date_Int,
    chg.Outage_Required, chg.Change_Status, chg.Change_Status_IM,
    chg.Reason_for_change, chg.Customer_Visible, chg.Change_Source,
    chg.Related_Ticket_Type, chg.Related_Ticket_ID,
    chg.Requested_By, chg.Requested_For, chg.Site_ID, chg.Site_Name,
    chg.Element_id, chg.Element_Type, chg.Element_Name,
    chg.Search_flag, chg.remedy_id, chg.Change_Manager,
    chg.Email_Manager, chg.Queue, a.customer as CUSTOMER_IM,
    a.contract as CONTRACT_IM, a.cid FROM exp_cm_cusid1 a, (SELECT *
    FROM EXP_BFG_CM_JOIN_V WHERE CUSTOMER_ID = 14187) chg WHERE
    a.bfg_cus_id = chg.customer_id AND a.bfg_con_id =
    chg.contract_id AND a.bfg_con_id IS NOT NULL
    FINDINGS SECTION (7 findings)
    1- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 26 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    2- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 26 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    3- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 10 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    4- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 10 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    5- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate REGEXP_LIKE ("T100"."C536871160",'^[[:digit:]]+$') used at
    line ID 6 of the execution plan contains an expression on indexed column
    "C536871160". This expression prevents the optimizer from selecting indices
    on table "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    6- Restructure SQL finding (see plan 1 in explain plans section)
    The predicate TO_NUMBER(TRIM("T100"."C536871160"))=:B1 used at line ID 6 of
    the execution plan contains an expression on indexed column "C536871160".
    This expression prevents the optimizer from selecting indices on table
    "ARADMIN"."T100".
    Recommendation
    - Rewrite the predicate into an equivalent form to take advantage of
    indices. Alternatively, create a function-based index on the expression.
    Rationale
    The optimizer is unable to use an index if the predicate is an inequality
    condition or if there is an expression or an implicit data type conversion
    on the indexed column.
    7- Restructure SQL finding (see plan 1 in explain plans section)
    An expensive "UNION" operation was found at line ID 1 of the execution plan.
    Recommendation
    - Consider using "UNION ALL" instead of "UNION", if duplicates are allowed
    or uniqueness is guaranteed.
    Rationale
    "UNION" is an expensive and blocking operation because it requires
    elimination of duplicate rows. "UNION ALL" is a cheaper alternative,
    assuming that duplicates are allowed or uniqueness is guaranteed.
    ERRORS SECTION
    - The current operation was interrupted because it timed out.
    EXPLAIN PLANS SECTION
    1- Original
    Plan hash value: 1047651452
    | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Inst |IN-OUT|
    | 0 | SELECT STATEMENT | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 1 | SORT UNIQUE | | 2 | 28290 | 567 (37)| 00:00:07 | | |
    | 2 | UNION-ALL | | | | | | | |
    |* 3 | HASH JOIN RIGHT ANTI | | 1 | 14158 | 373 (5)| 00:00:05 | | |
    | 4 | VIEW | VW_SQ_1 | 1 | 26 | 179 (3)| 00:00:03 | | |
    | 5 | NESTED LOOPS | | 1 | 37 | 179 (3)| 00:00:03 | | |
    |* 6 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    |* 7 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | 9 | 1 (0)| 00:00:01 | | |
    | 8 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 9 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 10 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 11 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 12 | UNION-ALL | | | | | | | |
    |* 13 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 14 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 15 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 16 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 17 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 18 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 19 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 20 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 21 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 22 | TABLE ACCESS BY INDEX ROWID| T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 23 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    | 24 | NESTED LOOPS | | 1 | 14132 | 193 (5)| 00:00:03 | | |
    |* 25 | HASH JOIN | | 1 | 14085 | 192 (5)| 00:00:03 | | |
    |* 26 | TABLE ACCESS FULL | T100 | 1 | 28 | 178 (3)| 00:00:03 | | |
    | 27 | VIEW | EXP_BFG_CM_JOIN_V | 3 | 42171 | 13 (24)| 00:00:01 | | |
    | 28 | UNION-ALL | | | | | | | |
    |* 29 | HASH JOIN | | 1 | 6389 | 5 (20)| 00:00:01 | | |
    | 30 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 31 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 410 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 32 | HASH UNIQUE | | 1 | 6052 | 6 (34)| 00:00:01 | | |
    |* 33 | HASH JOIN | | 1 | 6052 | 5 (20)| 00:00:01 | | |
    | 34 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 35 | REMOTE | PROP_CHANGE_INVENTORY_V | 1 | 73 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 36 | HASH UNIQUE | | 1 | 5979 | 3 (34)| 00:00:01 | | |
    | 37 | REMOTE | PROP_CHANGE_REQUEST_V | 1 | 5979 | 2 (0)| 00:00:01 | ARS_B~ | R->S |
    | 38 | TABLE ACCESS BY INDEX ROWID | T1451 | 1 | 47 | 1 (0)| 00:00:01 | | |
    |* 39 | INDEX RANGE SCAN | I1451_536870913_1 | 1 | | 1 (0)| 00:00:01 | | |
    Predicate Information (identified by operation id):
    3 - access("ITEM_0"="EXP_BFG_CM_JOIN_V"."CUSTOMER_ID" AND "ITEM_1"="EXP_BFG_CM_JOIN_V"."CONTRACT_ID")
    6 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    7 - access("C536870913"="C536870914")
    9 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")))
    10 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_0 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    13 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    17 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    23 - access("C536870913"="C536870914")
    25 - access("EXP_BFG_CM_JOIN_V"."CUSTOMER_ID"=TO_NUMBER(TRIM("C536871160")) AND
    "EXP_BFG_CM_JOIN_V"."CONTRACT_ID"=TO_NUMBER(TRIM("C536871088")))
    26 - filter("C536871050" LIKE '%FMS%' AND REGEXP_LIKE ("C536871160",'^[[:digit:]]+$') AND ("C536871088" IS NULL
    OR REGEXP_LIKE ("C536871088",'^[[:digit:]]+$')) AND TO_NUMBER(TRIM("C536871088")) IS NOT NULL AND
    TO_NUMBER(TRIM("C536871160"))=:SYS_B_1 AND "C536871160" IS NOT NULL AND "C536871050" IS NOT NULL AND "C7"=0)
    29 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    33 - access("CHG"."PRP_CHG_REFERENCE"="INV"."PRP_CHG_REFERENCE")
    39 - access("C536870913"="C536870914")
    Remote SQL Information (identified by operation id):
    14 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    15 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    18 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    19 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    21 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    30 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    31 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME","ELEMENT_SUMMARY","PRODUCT_NAME" FROM
    "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV" (accessing 'ARS_BFG_DBLINK.WORLD' )
    34 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    35 - SELECT "PRP_CHG_REFERENCE","SIT_ID","SIT_NAME" FROM "PROP_OWNER2"."PROP_CHANGE_INVENTORY_V" "INV"
    (accessing 'ARS_BFG_DBLINK.WORLD' )
    37 - SELECT "PRP_CHG_REFERENCE","CUS_ID","CUS_NAME","CNT_BFG_ID","PRP_TITLE","PRP_CHG_TYPE","PRP_DESCRIPTION","PR
    P_BTIGNITE_PRIORITY","PRP_CUSTOMER_PRIORITY","PRP_CHG_URGENCY","PRP_RESPONSE_REQUIRED_BY","PRP_REQUIRED_BY_DATE","P
    RP_CHG_OUTAGE_FLAG","PRP_CHG_STATUS","PRP_CHG_FOR_REASON","PRP_CHG_CUSTOMER_VISIBILITY","PRP_CHG_SOURCE_SYSTEM","PR
    P_RELATED_TICKET_TYPE","PRP_RELATED_TICKET_ID","CHANGE_INITIATOR","CHANGE_ORIGINATOR","CHANGE_MANAGER","QUEUE"
    FROM "PROP_OWNER2"."PROP_CHANGE_REQUEST_V" "CHG" WHERE "CUS_ID"=:1 (accessing 'ARS_BFG_DBLINK.WORLD' )
    -------------------------------------------------------------------------------
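
    For findings 1-6, the suggested rewrite is a function-based index; a sketch is below, assuming every value in C536871160 is actually numeric (otherwise TO_NUMBER raises ORA-01722 when the index is built). For finding 7, UNION ALL applies because the two branches are disjoint on a.bfg_con_id:

    -- Function-based index matching the predicate flagged by the tuning report:
    CREATE INDEX aradmin.t100_c536871160_fbi
        ON aradmin.t100 (TO_NUMBER(TRIM(c536871160)));

    -- Finding 7: branch 1 requires a.bfg_con_id IS NULL, branch 2 IS NOT NULL,
    -- so no row can appear in both branches; provided duplicates within each
    -- branch are acceptable, UNION ALL avoids the SORT UNIQUE:
    --   SELECT ... WHERE a.bfg_con_id IS NULL ...
    --   UNION ALL
    --   SELECT ... WHERE a.bfg_con_id IS NOT NULL ...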

    Please review the following threads:
    {message:id=9360002}
    {message:id=9360003}

  • Will the problems I am having with slow Safari response times continue if I port the image of my 4 to a new iPhone 5?

    As in many families, I am ready to upgrade my iPhone 4 to a 5 and pass on my 4 to my partner (who has my old 3), BUT I am resistant to do so because I have been having problems with Safari (very slow response time) ever since I upgraded to the new OS a month or so back. I have completed all the cleaning tips (cookies, individual app shutdowns) and hard-boot steps with no change. I can sit my partner's iPhone 3 next to my 4, and when using Safari the 3 beats the 4 hands down; the 4 takes almost double the time the 3 does. My 4 is quicker in all the other apps, so I don't believe it is a hardware issue. My tests were done using our home Wi-Fi, set up the same on both phones, after running all the cleaning tips on both.
    I actually broke down and called the support line, and the first-level support told me that if I upgraded to the 5 I would likely carry the problem to the new phone. She said she suspected malware or a virus, and that for $300 she could forward me to second-level support, who could scan my phone, see what the problem is on my 4, and support any issues on a new iPhone 5, OR, for a one-time lower fee of $79, just help me fix my 4.
    Thinking through her options, I chose none of the above. It seems like I should be able to either 1) isolate the issue myself with the help of this forum or 2) upgrade to the iPhone 5 and, if the problem follows to the new one, get support via the new device's warranty. If the support person is correct and the problem is in the software and would follow the device, then in theory I could wipe and restore my old 4 with the backup/data of my partner's 3, and the problem on the 4 should be resolved.
    I really don't want to carry the problem to a new phone if I can fix the issue first. I have AirWatch software to push my work email to my phone, and that is usually a pain to redo when I upgrade anyhow. I don't need to pay for a new phone that has the same problem as my old one. I will get grief from my partner for passing on an issue that will come up in the daily use of the old iPhone 4, which I will be responsible to fix anyhow. I have invested too much time already to generate problems on both our phones. Thanks in advance for any tips or guidance about a streamlined way forward.

    Basic troubleshooting from the User's Guide is reset, restart, restore (first from backup then as new).  Has any of this been tried?
    FYI, there are no viruses that affect iOS unless the device has been hacked or jailbroken, in which case they cannot be discussed here.

  • Error while creating data warehouse tables.

    Hi,
    I am getting an error while creating data warehouse tables.
    I am using OBIA 7.9.5.
    The contents of the generate_clt log are as below.
    >>>>>>>>>>>>>>>>>>>>>>>>>>
    Schema will be created from the following containers:
    Oracle 11.5.10
    Universal
    Conflict(s) between containers:
    Table Name : W_BOM_ITEM_FS
    Column Name: INTEGRATION_ID.
    The column properties that are different :[keyTypeCode]
    Success!
    <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    There are two rows in the DAC repository schema for the column and the table.
    The w_etl_table_col.KEY_TYPE_CD value for DW application is UNKNOWN and for the ORA_11i application it is NULL.
    Could this be the cause of the issue? If yes, why could the values be different and how to resolve this?
    If not, then what could be the problem?
    Any responses will be appreciated.
    Thanks and regards,
    Manoj.
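
    To see the two conflicting metadata rows mentioned above, something like the sketch below can be run against the DAC repository (KEY_TYPE_CD comes from the post; the NAME filter column is an assumption about the repository schema):

    -- Sketch: list the conflicting column-metadata rows in the DAC repository.
    SELECT *
      FROM w_etl_table_col
     WHERE UPPER(name) = 'INTEGRATION_ID'   -- assumed column-name column
       AND (key_type_cd = 'UNKNOWN' OR key_type_cd IS NULL);

    Comparing the two rows side by side usually shows which container's definition has to be aligned before regenerating the DDL.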

    Strange. The OBIA 7.9.5 Installation and Configuration Guide says the following:
    4.3.4.3 Create ODBC Database Connections
    Note: You must use the Oracle Merant ODBC driver to create the ODBC connections. The Oracle Merant ODBC driver is installed by the Oracle Business Intelligence Applications installer. Therefore, you will need to create the ODBC connections after you have run the Oracle Business Intelligence Applications installer and have installed the DAC Client.
    Several other users are getting the same message when creating DW tables.
