OWB Performance Issue
Hi
I have a performance issue with OWB.
OWB 9.0.4.8.21
Oracle 9.2.0.1.0
I designed some mappings with business rules/transformations on a Windows system (AMD Athlon, 1 x 1.8 GHz CPU, 1 GB RAM). When I run these mappings, CPU usage goes to 100% while RAM usage stays at 70%.
My mapping loads data from a flat file into tables using external tables. As the CPU usage is 100%, the upload time statistics are not reliable, so I transferred the .mdl from Windows to a higher-end Unix machine (HP-UX 11.00, 6 x 450 MHz CPUs, 6 GB RAM, OWB 9.2 (Unix), Oracle 9.2.0.1.0).
Logically my mappings should run faster on Unix, but they take three times longer than on the Windows system. Here too CPU usage goes to 100% while RAM usage stays at 30%.
One more observation on the Unix machine: the Oracle process that runs my mapping uses only one CPU while the other CPUs are not utilized.
Is there a way I can fork/thread the Oracle process that runs the mapping so that it uses all the CPUs instead of only one?
Do I need to make some changes in my mapping configuration/properties or in Oracle (init.ora) to improve the performance of my upload?
Thanks in Advance.
Manoj
Manoj,
If you use a Process Flow that includes several mappings, then the FORK activity ensures parallel execution of multiple mappings. See more on this in the OWB 9.2 User Guide, page 10-24, "FORK".
If you wish to have a single mapping execute on multiple CPUs in parallel, then it all depends on the nature of the mapping and the corresponding configuration options:
- You mentioned using External Tables. External tables have configuration options "Parallel Access Mode" and "Parallel Access Drivers" that control parallelism. See more on this in OWB 9.2 User Guide, page 5-18 "Parallel".
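A minimal sketch of the kind of external table DDL those options correspond to; the directory, table, and file names are illustrative:

-- Hypothetical external table; PARALLEL lets multiple slaves read the file.
CREATE TABLE src_customers_ext (
  customer_id   NUMBER,
  customer_name VARCHAR2(100)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY stage_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
  )
  LOCATION ('customers.csv')
)
PARALLEL 4;  -- corresponds to the parallel access driver count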
Other cases:
- If the mapping is run in row-based mode with the Parallel Row Code option set to 'True'. This takes advantage of the database's table function feature. See more on this in OWB 9.2 User Guide, page 11-7 "Parallel Row Code".
- If the mapping inserts into multiple tables using the Splitter operator with the Optimized Code option set to 'True'. This takes advantage of the database's multi-table insert feature, sketched below. See more on this in OWB 9.2 User Guide, page 11-8 "Optimized Code".
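A hedged sketch of the multi-table insert that a Splitter with Optimized Code = 'True' generates; the table and column names are illustrative:

-- Hypothetical conditional multi-table insert from one pass over the source.
INSERT ALL
  WHEN order_status = 'OPEN' THEN
    INTO open_orders   (order_id, amount) VALUES (order_id, amount)
  WHEN order_status = 'CLOSED' THEN
    INTO closed_orders (order_id, amount) VALUES (order_id, amount)
SELECT order_id, order_status, amount
FROM   stg_orders;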
The fact that a more powerful Unix server takes much longer to execute the same mappings suggests a problem with the database configuration. Parallelism is usually best achieved with PARALLEL_AUTOMATIC_TUNING set to 'true'. Beyond that, many manual options for configuring the database for parallelism are available. See the "Oracle9i Data Warehousing Guide", Chapter 21, "Using Parallel Execution".
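A minimal sketch of setting that parameter, assuming the 9i instance is started from an spfile; the values are illustrative starting points, not recommendations:

-- PARALLEL_AUTOMATIC_TUNING is static, so defer the change and restart.
ALTER SYSTEM SET parallel_automatic_tuning = TRUE SCOPE = SPFILE;
ALTER SYSTEM SET parallel_max_servers = 24 SCOPE = SPFILE;

-- After the restart, verify the parallel settings:
SELECT name, value FROM v$parameter WHERE name LIKE 'parallel%';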
Nikolai
Similar Messages
-
OWB Performance issue (mapping execution always takes min 60 sec)
Hi All,
For any OWB mapping we execute in one of our environments, the execution seems to hang for some time before it reaches the "Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'" statement. The log file shows a constant gap of 30 seconds before the <map>.main() function executes. The data extraction is very low, something like 10-1000 records.
Actions taken: increased the SGA pool size at the DB level to allow more resources, and changed -Xms64M -Xmx256M to -Xms335M -Xmx440M.
But no help.
An extract from the OWB log follows.
2006/03/15-09:29:09-WST [1E0BF3BF] Initializing execution for auditId= 28339 parentAuditId= null topLevelAuditId=28339 taskName=XXIF_OUT_CSV_TRANS
2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create adapter 'class.RuntimePlatform.0.NativeExecution'
2006/03/15-09:29:09-WST [1E0BF3BF] Attempting to create native operator 'class.RuntimePlatform.0.NativeExecution.PLSQL'
2006/03/15-09:29:39-WST [1E0D73BF] PLSQL callspec: declare l_env wb_rt_mapaudit.wb_rt_name_values; l_IN_BATCH_ID null........
Kindly note the difference of 30 seconds between creating the native operator and the actual execution of the PL/SQL code.
The same set of mappings works fine (5-15 secs) in our DEV environment but takes additional time (around 1-3 mins) in the TEST environment, and the mapping executions do not run in parallel (is this expected behaviour?). If we have 10 separate executions of the mapping, it takes 30 mins to complete in the TEST environment compared to 3 mins in DEV.
The noticeable difference between these two environments: the mappings were created using the 10.1.0.2.0 client and deployed to a 10.1.0.1.0 repository in the DEV environment, but the TEST environment uses a 10.1.0.4.0 repository.
Checking the Audit Browser, we can see the total elapsed time is 61 secs but the actual mapping execution time is only 1 sec.
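A hedged sketch of how those two figures might be compared directly in the runtime repository; I'm assuming the ALL_RT_AUDIT_EXECUTIONS view and these column names exist in this repository version, so verify against your own installation:

-- Hypothetical query against the OWB runtime audit views (assumed names);
-- elapse_time is the total elapsed seconds, including any startup overhead.
SELECT execution_audit_id,
       execution_name,
       created_on,
       elapse_time
FROM   all_rt_audit_executions
ORDER  BY created_on DESC;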
Is there any configuration setting that causes this delay?
Any pointers on this will be of great help.
Regards,
njain
I am having exactly the same issue as you have described here, i.e. my mappings take some time to initialize before they run. Did you or anyone find a solution to this problem?
-
Some Thoughts On An OWB Performance/Testing Framework
Hi all,
I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc. are being used OK. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mappings (process flows).
For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
Observations On The Current State of OWB Performance Tuning & Testing
At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested and changes the test status of mappings when you make changes to ones that they are dependent on.
OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimizing Oracle Performance", as a way of tuning our generated mapping code.
Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and have results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
Some initial thoughts on how this could be accomplished
- Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
- Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping (see the sketch after this list)
- Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
- Put in place a way of tracing a collection of mappings, i.e. a process flow
- The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
- Perhaps store trace results in a repository? reporting? exception reporting?
- At an instance level, come up with some stock recommendations for instance settings
- identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
- put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
- Incorporate any existing "performance best practices" for OWB development
- define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
- Other ideas around testing?
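To make the 10046 idea above concrete, here is a minimal sketch of the kind of pre- and post-mapping procedures that could toggle the trace; the procedure names and the tracefile identifier are illustrative, not part of OWB:

-- Pre-mapping procedure: tag the trace file and switch on level-8 tracing
-- (level 8 includes wait events).
CREATE OR REPLACE PROCEDURE trace_on (p_tag IN VARCHAR2) IS
BEGIN
  EXECUTE IMMEDIATE
    'ALTER SESSION SET tracefile_identifier = ''' || p_tag || '''';
  EXECUTE IMMEDIATE
    'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 8''';
END trace_on;
/
-- Post-mapping procedure: switch tracing off again.
CREATE OR REPLACE PROCEDURE trace_off IS
BEGIN
  EXECUTE IMMEDIATE
    'ALTER SESSION SET EVENTS ''10046 trace name context off''';
END trace_off;
/

The resulting trace file can then be profiled with TKPROF or the Hotsos Profiler as described above.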
Suggested Approach
- For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
- For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
- Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
- get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
- identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
- Investigate what additional tuning options and advisers are available with 10g
- Investigate the effect of system statistics & come up with recommendations.
Further reading / resources:
- "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
- "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
- "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
- "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
- "Why Isn't Oracle Using My Index?!" Jonathan Lewis
http://www.dbazine.com/jlewis12.shtml
- "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
- Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
http://www.hotsos.com/downloads/registered/00000029.pdf
- Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
http://otn.oracle.com/pub/articles/schumacher_10gwait.html
- Article referencing an OWB forum posting
http://www.rittman.net/archives/001031.html
- How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
- What is the fastest way to load data from files? - OWB exchange tip
http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
- Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
- OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
- Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
- Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
http://utplsql.sourceforge.net/
Relevant postings from the OTN OWB Forum
- Bulk Insert - Configuration Settings in OWB
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
- Default Performance Parameters
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
- Performance Improvements
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
- Map Operator performance
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
- Performance of mapping with FILTER
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
- Poor mapping performance
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
- Optimizing Mapping Performance With OWB
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
- Performance of the OWB-Repository
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
- One large JOIN or many small ones?
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
- NATIVE PL SQL with OWB9i
http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
Next Steps
Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind, does anyone have existing best practices for tuning or testing? Have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
Any feedback, add it to this forum posting or send it directly through to me at [email protected]. I'll report back on a proposed approach in due course.
Hi Mark,
interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a critical path, and then I can visually inspect it for any bottleneck processes. I usually find that there are not more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is the performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my diktat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!) (OK, I'll accept MS Project.)
Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole (stuff like recovery/restart, late-arriving data, and so on).
For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
All suggestions on how to do that grafting gratefully received!
To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
Cheers,
Donna
http://www.donnapkelly.pwp.blueyonder.co.uk -
Process flow/map performance issues
We have some issues with our OWB-based application and we're looking to find out if there are different ways we could be using the tool, or features/options we've missed.
We are trying to maintain a near real time feed of data from a front end system into our warehouse, which was built using OWB 10.2.0.3 over a 10.2.0.4 database. The bulk of the application consists of OWB maps with a few hand-written PL/SQL objects, all executed in a series of hierarchical OWB process flows. Maps/transformations are executed either sequentially or in parallel where the referential integrity of the model allows.
The problem is that we have around 150 tables in the datamart which could potentially require updating on each refresh cycle, although in practice only a few tables have any activity on a typical refresh cycle. The cycle consists of loading data into a set of staging tables, and from there the data is transformed into the main schema, often with multiple maps per target table.
On every cycle we run hundreds of maps, the vast majority of which process zero rows. Each map runs quickly and efficiently in its own right but collectively they add up to a 5 - 10 min cycle even if there is no data to process.
There are 2 avenues which we'd like to explore and would be grateful if anyone could provide any pointers/suggestions :-
1) It appears that each map opens and closes its own database session when it executes. I presume this was done because a single process flow could be constructed with maps executing in different target schemas, but we know that's not the case for us. We'd like to know if there is any way to configure the database connection at a higher level (e.g. process flow) so it opens a connection once and executes each of the maps (database packages) in that one session.
Our DBAs are experimenting with 'shared server' settings at a database level which may help to some degree but won't be the whole story.
2) Another option is simply to run less maps eg. load the staging area as now, collate stats on which staging tables contain new data, and then apply some logic such that subsequent maps only execute if the relevant staging table(s) contain(s) some new data, otherwise bypass that map.
We tried experimenting with the 'Pre Mapping Process' operator, but essentially that just generates another function call from the map package, so we still have the overhead of opening a database session for each map to run the package. Minimal gain.
We thought about adding a function call in the process flow before each map and then branching to either execute or bypass the map as appropriate, but the function call still requires opening/closing a database session each time so, once again, minimal gain.
What we really want is some way for a map or process flow to perform that check without logging onto the database repeatedly.
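For avenue 2, a hedged sketch of a gate function a process flow transition could call before the downstream maps; the names are illustrative, and each call still needs a session, though only one per staging table rather than one per map:

-- Hypothetical gate function: returns 1 if the named staging table has at
-- least one row, else 0. DBMS_ASSERT (available in 10gR2) guards the
-- dynamically supplied table name.
CREATE OR REPLACE FUNCTION stg_has_new_data (p_table IN VARCHAR2)
  RETURN NUMBER
IS
  l_dummy NUMBER;
BEGIN
  EXECUTE IMMEDIATE
    'SELECT 1 FROM ' || DBMS_ASSERT.SIMPLE_SQL_NAME(p_table) ||
    ' WHERE ROWNUM = 1'
    INTO l_dummy;
  RETURN 1;  -- data staged: run the dependent maps
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    RETURN 0;  -- nothing staged: bypass the dependent maps
END stg_has_new_data;
/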
Any ideas on the above, or other potential solutions anyone could suggest, would be greatly appreciated.
Hi,
Please see if these documents help.
Note: 554635.1 - Create Accounting Process Performs Poorly When 100K + Distributions are Passed for an Event
Note: 954273.1 - Multiple Create Accounting Requests Result In Poor Performance For Online Accruals
Note: 763500.1 - R12: Performance Issue with Create Accounting
Note: 733637.1 - R12:Performance Issue When Running Accounting Program Xlaaccup
Note: 781311.1 - Create Accounting Process Taking A Long Time To Complete After Appying Critical Patches
Note: 557869.1 - EBS: R12 Oracle Financials Critical Patches
Regards,
Hussein -
----Constraints and Performance issues----
Hi all,
I have a major concern and I would like your suggestions on the best way to handle it.
I have a staging table cust_staging. I have 2 target tables customer, customer_address which must be populated from this staging table.
The customer key in all 3 tables is the primary key. For table customer_address, the customer_key is also referenced from the customer table.
Incremental data will be available in the staging table (around 0.2 million rows) and the customer table will have approximately 2 million records.
My concern is that I have to insert/update this information into the target tables without disabling the foreign key constraints.
I tried to insert into both target tables with the constraints enabled, but the mapping just hangs and I am forced to kill the process. I tried using a single mapping to populate both tables and, when that went into hang mode, I tried one mapping for customer first and then another mapping for customer_address. This also just hangs.
Next I tried to disable the constraint and re-enable it in the mapping itself. My concern here is that if I do a blind insert and there is a violation when I re-enable the constraints, the constraint may be left in an unusable state and my target table will become unusable.
So how do I tackle this problem? Can I first disable the constraints, incorporate some logic in pre-mapping processes to apply business rules that check the constraints explicitly, and then redirect the bad records to a reject target and the other records to the actual target?
Please do let me know how I should handle this situation in OWB, bearing in mind the performance issues. We use OWB 9.2.
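A hedged sketch of the disable/load/re-enable pattern that captures violators instead of leaving the constraint half enabled; the table and constraint names are illustrative, and the EXCEPTIONS table is the one created by $ORACLE_HOME/rdbms/admin/utlexcpt.sql:

ALTER TABLE customer_address DISABLE CONSTRAINT fk_custaddr_customer;

-- ... run the load mapping here ...

-- Re-enable with validation; rows that violate the FK are recorded by ROWID
-- rather than simply failing the ALTER.
ALTER TABLE customer_address
  ENABLE VALIDATE CONSTRAINT fk_custaddr_customer
  EXCEPTIONS INTO exceptions;

-- If the ENABLE fails, the offending rows can be found and rerouted:
SELECT row_id FROM exceptions;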
-
Performance issues executing process flows after upgrading db to 10G
We have installed OWF 2.6.2, and initially our database was at 9.2. Last week we upgraded our database to 10g, and process flow executions are taking a lot longer: from 1 minute to 15 minutes.
Any ideas anyone what could be the cause of this performance issue?
Thanks,
Yanet
Hi,
Oracle 10g behaves differently with regard to table and index statistics. So check these, and check whether the mappings are updating the statistics at the right moments with respect to the ETL process, and at the right interval.
Also, check your generated sources for how statistics are gathered (dbms_stats.gather...). Does an index that might play a vital role in Oracle 9i get new statistics, or only the table? Or only the table whose row count was doubled by this mapping?
You can always take matters into your own hands by letting OWB NOT generate the source for gathering statistics, and calling your own procedure in a post-mapping process, as sketched below.
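A minimal sketch of such a post-mapping statistics call; the owner and table names are illustrative:

-- Hypothetical post-mapping procedure body: refresh optimizer statistics on
-- the table just loaded, cascading to its indexes.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'DWH',
    tabname          => 'CUSTOMER',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);  -- gather index statistics too
END;
/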
Regards,
André -
Report Performance Issue - Activity
Hi gurus,
I'm developing an Activity report using Transactional database (Online real time object).
The purpose of the report is to list all contact-related activities, and activities NOT related to a contact, by activity owner (user ID).
To fulfil that requirement I've created 2 reports:
1) All Activities related to Contact -- Report A
pulls in Activity ID, Activity Type, Status, Contact ID
2) All Activities not related to Contact UNION All Activities related to Contact (Base report) -- Report B
To get the list of activities not related to a contact, I'm using an advanced filter based on the result of another request, which I think is the part that slows down the query:
<Activity ID not equal to any Activity ID in Report B>
Has anyone encountered performance issues due to advanced filters in Analytics before?
Any input is really appreciated.
Thanks in advance,
Fina
Fina,
Union is always the last option. If you can get all records in one report, do not use union.
Since all the records you are targeting are in the Activity subject area, it is not necessary to combine reports. Add a column with the following logic:
if contact id is null (or = 'Unspecified') then owner name else contact name
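A hedged sketch of that formula as an Answers column expression; the logical table and column names are illustrative and depend on your subject area:

CASE
  WHEN Contact."Contact ID" IS NULL OR Contact."Contact ID" = 'Unspecified'
  THEN Employee."Owner Name"
  ELSE Contact."Contact Name"
END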
Hopefully, this helps. -
Report performance Issue in BI Answers
Hi All,
We have a performance issue with reports. A report runs for more than 10 mins; we took the query from the session log and ran it in the database, where it took no more than 2 mins. We have verified that there are proper indexes on the WHERE clause columns.
Could anyone suggest how to improve the performance in BI Answers?
Thanks in advance,
I hope you don't have many CASE statements and complex calculations in Answers.
The next thing to monitor is how many rows of data the query retrieves. If the volume is huge, then Answers takes time to do the formatting because you are dumping a huge volume of data. A database (like Teradata) initially returns something like 1-2000 records; if you hit "show all records", even the database is going to take a fair amount of time if you are dumping many records.
hope it helps
thanks
Prash -
BW BCS cube(0bcs_vc10 ) Report huge performance issue
Hi Masters,
I am working on a solution for a BW report developed on the 0BCS_VC10 virtual cube.
Some of the queries take 15 to 20 minutes to execute.
This is a huge performance issue. We are using BW 3.5, and the report is developed in BEx and published through the portal. If anyone has faced a similar problem, please advise how you tackled it, and give the detailed analysis approach you used to resolve it.
The current service packs we are using are:
SAP_BW 350 0016 SAPKW35016
FINBASIS 300 0012 SAPK-30012INFINBASIS
BI_CONT 353 0008 SAPKIBIFP8
SEM-BW 400 0012 SAPKGS4012
Best of Luck
Chris
Ravi,
I already did that; it is not helping the performance much. Reports are still taking 15 to 20 minutes. I wanted to know whether anybody in this forum has had the same issue and how they resolved it.
Regards,
Chris -
This is the question we will try to answer...
What is the bottleneck (hardware) of Adobe Premiere Pro CS6?
I used PPBM5 as a benchmark testing template.
All the data and logs have been collected using performance counters.
First of all, a description of my computer...
Operating System
Microsoft Windows 8 Pro 64-bit
CPU
Intel Xeon E5 2687W @ 3.10GHz
Sandy Bridge-EP/EX 32nm Technology
RAM
Corsair Dominator Platinum 64.0 GB DDR3
Motherboard
EVGA Corporation Classified SR-X
Graphics
PNY Nvidia Quadro 6000
EVGA Nvidia GTX 680 // Yes, I created bench stats for both cards
Hard Drives
16.0GB Romex RAMDISK (RAID)
556GB LSI MegaRAID 9260-8i SATA3 6GB/s 5 disks with Fastpath Chip Installed (RAID 0)
I have other RAID installed, but not relevant for the present post...
PSU
Corsair 1000 Watts
After many days of tests, I want to share my results with the community and comment on them.
CPU Introduction
I tested my CPU and pushed it to maximum speed to understand where the limit is and whether I can reach it, and I've logged all the results precisely in a graph (see picture 1).
Intro: I tested my Xeon E5-2687W (8 cores with Hyper-Threading - 16 threads) to find out whether programs can use it to the maximum. I used Prime95 to get the result. // I know this seems ordinary, but you will understand soon...
The result: Yes, I can get 100% of my CPU with 1 program using 20 threads in parallel. The CPU gives everything it can!
Comment: I plotted 3 I/O metrics (CPU, disk, RAM) for my computer during the test...
(picture 1)
Disk Introduction
I tested my disk and pushed it to maximum speed to understand where the limit is, and I've logged all the results precisely in a graph (see picture 2).
Intro: I tested my 556 GB RAID 0 (LSI MegaRAID 9260-8i SATA3 6 Gb/s, 5 disks with FastPath chip installed) to see whether I can reach maximum disk usage (0% idle time).
The result: As you can see in picture 2, yes, I can get the maximum from my drive at ~1.2 GB/sec read/write, steady!
Comment: I plotted 3 I/O metrics (CPU, disk, RAM) for my computer during the test to see the impact of transferring many GB of data over ~10 sec...
(picture 2)
Now I know my limits! It's time to go deeper into the subject!
PPBM5 (H.264) Result
I rendered the sequence (H.264) using Adobe Media Encoder.
The result :
My CPU is not used at 100%; it hovers around 50%.
My disk is totally idle!
All processes are idle except Adobe Media Encoder.
The transfer rate looks like a wave (up and down), probably caused by (encoding... write... encoding... write...). // It's OK, ~5 MB/sec transfer rate!
CPU power management gives 100% of the clock to the CPU during the encoding process (that's fine; the clock is stable during the process).
RAM: more than enough! 39 GB of RAM free after the test! // Excellent
~65 threads opened by Adobe Media Encoder (good - threads are a sign that the program tries to use many cores!)
GPU load on the card also looks like a wave (up and down): ~40% GPU usage during the encoding process.
GPU RAM usage is 1.2 GB (no problem with either the GTX 680 or the Quadro 6000 with 6 GB of RAM!)
Comment/Question: The CPU is free (50%), the disks are free (99%), the GPU is free (60%), the RAM is free (62%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
Other: The Quadro 6000 & GTX 680 give the same result!
(picture 3)
PPBM5 (Disk Test) Result (RAID LSI)
I rendered the sequence (Disk Test) using Adobe Media Encoder on my RAID 0 LSI disk.
The result :
My CPU is not used at 100%.
My disk waves up and down, but stays far, far from its limit!
All processes are idle except Adobe Media Encoder.
The transfer rate waves up and down again, probably caused by (buffering... write... buffering... write...). // It's OK, a ~375 MB/sec peak transfer rate! Easy!
CPU power management gives 100% of the clock to the CPU during the encoding process (that's fine; the clock is stable during the process).
RAM: more than enough! 40.5 GB of RAM free after the test! // Excellent
~48 threads opened by Adobe Media Encoder (good - threads are a sign that the program tries to use many cores!)
GPU load on the card = 0 (this kind of encoding does not use the GPU).
GPU RAM usage is 400 MB (not used for encoding).
Comment/Question: The CPU is free (65%), the disks are free (60%), the GPU is free (100%), the RAM is free (63%); my computer is not pushed to its limit during the encoding process. Why? Is there some time delay in the encoding process?
(picture 4)
PPBM5 (Disk Test) Result (Direct in RAMDrive)
I rendered the same sequence (Disk Test) using Adobe Media Encoder directly in my RamDrive
Comment/Question: Look at the transfer rate in picture 5. It's exactly the same speed as with my RAID 0 LSI controller. Impossible! Look in the same picture at the transfer rate I can reach with the RAM drive (> 3.0 GB/sec steady), and I don't go under 30% of disk usage. The CPU is idle (70%), the disk is idle (100%), the GPU is idle (100%) and the RAM is free (63%). // This kind of result leaves me really confused. It smells like a bug and a big problem with hardware and I/O usage in CS6!
(picture 5)
PPBM5 (MPEG-DVD) Result
I rendered the sequence (MPEG-DVD) using Adobe Media Encoder.
The result :
My CPU is not used at 100%.
My disk is totally idle!
All processes are idle except Adobe Media Encoder.
The transfer rate waves up and down again, probably caused by (encoding... write... encoding... write...). // It's OK, ~2 MB/sec transfer rate! A real joke!
CPU power management gives 100% of the clock to the CPU during the encoding process (that's fine; the clock is stable during the process).
RAM: more than enough! 40 GB of RAM free after the test! // Excellent
~80 threads opened by Adobe Media Encoder (a lot of threads, but that's OK in multi-threaded apps!)
GPU load on the card = 100 (this uses the maximum of my GPU).
GPU RAM usage is 1 GB.
Comment/Question: The CPU is free (70%), the disks are free (98%), the GPU is loaded (MAX), the RAM is free (63%); my computer is pushed to its limit during the encoding process for the GPU only. So for this kind of encoding, the speed limit is set by the slowest component (the video card's GPU).
Other: The Quadro 6000 is slower than the GTX 680 for this kind of encoding (~20 s slower than the GTX).
(picture 6)
Encoding single clip FULL HD AVCHD to H.264 Result (Premiere Pro CS6)
You can look the result in the picture.
Comment/Question: The CPU is free (55%), the disks are free (99%), the GPU is free (90%), the RAM is free (65%); my computer is not pushed to its limit during the encoding process. Why? Adobe Premiere seems to have some bug with thread management. My hardware is idle! I understand AVCHD can be very difficult to decode, but where is the waste? My computer is willing, but the software is not!
(picture 7)
Render composition using 3D Raytracer in After Effects CS6
You can look the result in the picture.
Comment: The GPU seems to be the bottleneck when using After Effects. The CPU is free (99%), the disks are free (98%), the memory is free (60%), and it depends on the settings and type of project.
Other: The Quadro 6000 & GTX 680 give the same rendering time for the composition.
(picture 8)
Conclusion
There is nothing you can do (I think) with CS6 to get better performance right now. The GTX 680 is the best consumer-grade card and the Quadro 6000 is the best professional card. Both cards give really similar results (I will probably return my GTX 680 since I don't really get any better performance). I haven't used a Tesla card with my Quadro, but currently neither Premiere Pro nor After Effects uses multiple GPUs. I tried to use both cards together (GTX & Quadro), but After Effects gives priority to the slower card (in this case, the GTX 680).
Premiere Pro, I'm speechless! Premiere Pro is not able to get the maximum performance from my computer - not just 10% or 20% short, but 60% on average. I'm a programmer; multi-threaded apps are difficult to manage and I can understand Adobe's programmers. But if anybody has comments about this post, tricks or any kind of solution, please comment. It seems to be a bug...
Thank you.
Patrick,
I can't explain everything, but let me give you some background as I understand it.
The first issue is that CS6 has a far less efficient internal buffering or caching system than CS5/5.5. That is why the MPEG encoding in CS6 is roughly 2-3 times slower than the same test with CS5. There is some 'under-the-hood' processing going on that causes this significant performance loss.
The second issue is that AME does not handle regular memory and inter-process memory very well. I have described this here: Latest News
As to your test results, there are some other noteworthy things to mention. 3D ray tracing in AE is not very good at using all CUDA cores. In fact it is lousy: it only uses very few cores, and the threading is pretty bad and does not use the video card's capabilities effectively. Whether that is a driver issue with nVidia or an Adobe issue, I don't know, but whichever way you turn it, the end result is disappointing.
The overhead AME carries in our tests is something we are looking into and the next test will only use direct export and no longer the AME queue, to avoid some of the problems you saw. That entails other problems for us, since we lose the capability to check encoding logs, but a solution is in the works.
You see very low GPU usage during the H.264 test, since there are only very few accelerated parts in the timeline, in contrast to the MPEG2-DVD test, where there is rescaling going on and that is CUDA accelerated. The disk I/O test suffers from the problems mentioned above and is the reason that my own disk I/O results are only 33 seconds with the current test; but when I extend the duration of that timeline to 3 hours, the direct export method gives me 22 seconds, although the amount of data to be written, 37,092 MB, has increased threefold - an effective write speed of 1,686 MB/s.
There are a number of performance issues with CS6 that Adobe is aware of, but whether they can be solved and in what time, I haven't the faintest idea.
Just my $ 0.02 -
Performance Issue for BI system
Hello,
We are facing performance issues with our BI system. It is a pre-production system and its performance is degrading badly every day. On checking the system I found that program buffer swaps are increasing every day, so I asked for the parameter abap/buffersize to be changed from 300 MB to 500 MB. But still no major improvement is seen in the system.
There is 16 GB of RAM available, and the server is HP-UX with NetWeaver 2004s and Oracle 10.2.0.4.0 installed.
The main problem is that running a report or creating a query takes far too long.
Kindly help me.
Hello Siva,
Thanks for your reply, but I have checked ST02 and ST03 and also SM50, and they look normal.
We have 9 dialog processes, 3 background, 2 update and 1 spool.
No one is using the system currently, but in ST02 I can see the swaps are in red.
Buffer HitRatio % Alloc. KB Freesp. KB % Free Sp. Dir. Size FreeDirEnt % Free Dir Swaps DB Accs
Nametab (NTAB) 0
Table definition 99,60 6.798 20.000 29.532 153.221
Field definition 99,82 31.562 784 2,61 20.000 6.222 31,11 17.246 41.248
Short NTAB 99,94 3.625 2.446 81,53 5.000 2.801 56,02 0 2.254
Initial records 73,95 6.625 998 16,63 5.000 690 13,80 40.069 49.528
0
Program 97,66 300.000 1.074 0,38 75.000 67.177 89,57 219.665 725.703
CUA 99,75 3.000 875 36,29 1.500 1.401 93,40 55.277 2.497
Screen 99,80 4.297 1.365 33,35 2.000 1.811 90,55 119 3.214
Calendar 100,00 488 361 75,52 200 42 21,00 0 158
OTR 100,00 4.096 3.313 100,00 2.000 2.000 100,00 0
0
Tables 0
Generic Key 99,17 29.297 1.450 5,23 5.000 350 7,00 2.219 3.085.633
Single record 99,43 10.000 1.907 19,41 500 344 68,80 39 467.978
0
Export/import 82,75 4.096 43 1,30 2.000 662 33,10 137.208
Exp./ Imp. SHM 89,83 4.096 438 13,22 2.000 1.482 74,10 0
SAP Memory Curr.Use % CurUse[KB] MaxUse[KB] In Mem[KB] OnDisk[KB] SAPCurCach HitRatio %
Roll area 2,22 5.832 22.856 131.072 131.072 IDs 96,61
Page area 1,08 2.832 24.144 65.536 196.608 Statement 79,00
Extended memory 22,90 958.464 1.929.216 4.186.112 0 0,00
Heap memory 0 0 1.473.767 0 0,00
Call Stati HitRatio % ABAP/4 Req ABAP Fails DBTotCalls AvTime[ms] DBRowsAff.
Select single 88,59 63.073.369 5.817.659 4.322.263 0 57.255.710
Select 72,68 284.080.387 0 13.718.442 0 32.199.124
Insert 0,00 151.955 5.458 166.159 0 323.725
Update 0,00 378.161 97.884 395.814 0 486.880
Delete 0,00 389.398 332.619 415.562 0 244.495
-
RE: Case 59063: performance issues w/ C TLIB and Forte3M
Hi James,
Could you give me a call, I am at my desk.
I had meetings all day and couldn't respond to your calls earlier.
-----Original Message-----
From: James Min [mailto:jminbrio.forte.com]
Sent: Thursday, March 30, 2000 2:50 PM
To: Sharma, Sandeep; Pyatetskiy, Alexander
Cc: sophiaforte.com; kenlforte.com; Tenerelli, Mike
Subject: Re: Case 59063: performance issues w/ C TLIB and Forte 3M
Hello,
I just want to reiterate that we are very committed to working on
this issue, and that our goal is to find out the root of the problem. But
first I'd like to narrow down the avenues by process of elimination.
Open Cursor is something that is commonly used in today's RDBMS. I
know that you must test your query in ISQL using some kind of execute
immediate, but Sybase should be able to handle an open cursor. I was
wondering if your Sybase expert commented on the fact that the server is
not responding to a commonly used command like 'open cursor'. According to
our developer, we are merely following the API from Sybase, and open cursor
is not something that particularly slows down a query for several minutes
(except maybe the very first time). The logs show that Forte is waiting for
a status from the DB server. Actually, using prepared statements and open
cursor ends up being more efficient in the long run.
Some questions:
1) Have you tried to do a prepared statement with open cursor in your ISQL
session? If so, did it have the same slowness?
2) How big is the table you are querying? How many rows are there? How many
are returned?
3) When there is a hang in Forte, is there disk-spinning or CPU usage in
the database server side? On the Forte side? Absolutely no activity at all?
We actually have a Sybase set-up here, and if you wish, we could test out
your database and Forte PEX here. Since your queries seems to be running
off of only one table, this might be the best option, as we could look at
everything here, in house. To do this:
a) BCP out the data into a flat file. (character format to make it portable)
b) we need a script to create the table and indexes.
c) the Forte PEX file of the app to test this out.
d) the SQL staement that you issue in ISQL for comparison.
If the situation warrants, we can give a concrete example of
possible errors/bugs to a developer. Dial-in is still an option, but to be
able to look at the TOOL code, database setup, etc. without the limitations
of dial-up may be faster and more efficient. Please let me know if you can
provide this, as well as the answers to the above questions, or if you have
any questions.
Regards,
At 08:05 AM 3/30/00 -0500, Sharma, Sandeep wrote:
James, Ken:
FYI, see attached response from our Sybase expert, Dani Sasmita. She has
already tried what you suggested and results are enclosed.
++
Sandeep
-----Original Message-----
From: SASMITA, DANIAR
Sent: Wednesday, March 29, 2000 6:43 PM
To: Pyatetskiy, Alexander
Cc: Sharma, Sandeep; Tenerelli, Mike
Subject: Re: FW: Case 59063: Select using LIKE has performance
issues
w/ CTLIB and Forte 3M
We did that trick already.
When it is hanging, I can see what it is doing.
It is doing OPEN CURSOR, but the exact statement of the cursor it is trying
to open is not clear.
When we run the query directly to Sybase, not using Forte, it is clearly
not opening any cursor.
And running it directly to Sybase many times, the response is always
consistently fast.
It is just when the query runs from Forte to Sybase, it opens a cursor.
But again, in the Forte code, Alex is not using any cursor.
In trying to capture the query, we even tried to audit any statement coming
to Sybase. Same thing, just open cursor. No cursor declaration anywhere.
==============================================
James Min
Technical Support Engineer - Forte Tools
Sun Microsystems, Inc.
1800 Harrison St., 17th Fl.
Oakland, CA 94612
james.minsun.com
510.869.2056
==============================================
Support Hotline: 510-451-5400
CUSTOMERS open a NEW CASE with Technical Support:
http://www.forte.com/support/case_entry.html
CUSTOMERS view your cases and enter follow-up transactions:
http://www.forte.com/support/view_calls.html
Earthlink wrote:
Contrary to my understanding, the with_pipeline procedure runs 6 times slower than the legacy no_pipeline procedure. Am I missing something?
Well, we're missing a lot here.
Like:
- a database version
- how did you test
- what data do you have, how is it distributed, indexed
and so on.
If you want to find out what's going on then use a TRACE with wait events.
All necessary steps are explained in these threads:
HOW TO: Post a SQL statement tuning request - template posting
http://oracle-randolf.blogspot.com/2009/02/basic-sql-statement-performance.html
Another nice one is RUNSTATS:
http://asktom.oracle.com/pls/asktom/ASKTOM.download_file?p_file=6551378329289980701 -
Is there a recommended limit on the number of custom sections and the cells per table so that there are no performance issues with the UI?
Thanks Kelly,
The answers would be the following:
1200 cells per custom section (NEW COUNT), and up to 30 custom sections per spec.
Assuming all will be populated, and this would apply to all final material specs in the system which could be ~25% of all material specs.
The cells will be numeric, free text, drop downs, and some calculated numeric.
Are we reaching the limits for UI performance?
Thanks -
IOS 8.1+ Performance Issue
Hello,
I have encountered a serious performance bug in an Adobe AIR iOS application on devices running iOS 8.1 or later. Within approximately 1-2 minutes, the fps drops to 7 or lower without any interaction with the app. This is very noticeable: the app looks frozen for about 0.5 seconds. The bug doesn't appear in every session.
Devices tested: iPad Mini iOS 8.1.1, iPhone 6 iOS 8.2. iPod Touch 4 iOS 6 is working correctly.
Air SDK versions: 15 and 17 tested.
I can track the bug using Adobe Scout. There is a noticeable spike at frame time 1.16, and the framerate drops to 7.0. The app spends much time in the function 'Runtime overhead'. Sometimes the top activity is 'Running AS3 attached to frame' or 'Waiting For Next Frame' instead of 'Runtime overhead'.
The bug can be reproduced with an empty application that has a single bitmap on the stage. Open the app, wait for two minutes, and the bug should appear. If not, just close and relaunch the app.
Bugbase link: Bug#3965160 - iOS 8.1+ Performance Issue
Miska Savela
Hi
I'd already activated Messages and entered the 6-digit code I was presented with into my iPhone. I can receive text messages from non-iOS users on my iMac and can reply to those messages.
I just can't send a new message from scratch to a non-iOS user :-s
Thanks
Baz -
Returning multiple values from a called tabular form(performance issue)
I hope someone can help with this.
I have a form that calls another form to display a multi-column tabular list of values (it needs to allow user sorting, so I could not use an LOV).
The user selects one or more records from the list using check boxes. To detect the selected records, I loop through the block looking for checked boxes and return those records to the calling form via a PL/SQL table.
The form displaying the tabular list loads quickly (about 5000 records in the base table). However, when I select one or more values from the list and return to the calling form, it takes a while (about 3-4 minutes) to get back with the selected values.
I guess it is going through the block (all 5000 records) looking for checked boxes, and that is what is causing the noticeable pause.
Is this normal given the data volumes I have, or are there better techniques or tricks I could use to improve performance? I am using Forms 6i.
Sorry for being so long-winded, and thanks in advance for any help.
Try writing to your PL/SQL table when the user selects (and removing the entry when they deselect) by using a WHEN-CHECKBOX-CHANGED trigger. This will eliminate the need for you to loop through a block with 5000 records and should improve your performance.
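A hedged sketch of that trigger; the block and item names, and the pkg_selection form-level package holding the PL/SQL table, are illustrative, not Forms built-ins:

-- WHEN-CHECKBOX-CHANGED trigger on CUST_LIST.SELECTED (hypothetical names).
DECLARE
  l_rec PLS_INTEGER := TO_NUMBER(:SYSTEM.CURSOR_RECORD);
BEGIN
  IF CHECKBOX_CHECKED('CUST_LIST.SELECTED') THEN
    pkg_selection.g_keys(l_rec) := :CUST_LIST.CUSTOMER_KEY;  -- remember this row
  ELSE
    pkg_selection.g_keys.DELETE(l_rec);                      -- forget it on deselect
  END IF;
END;

When the user returns to the calling form, the PL/SQL table already holds exactly the selected keys, so no pass over the 5000-record block is needed.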
I am not aware of any performance issues with PL/SQL tables in forms, but if you still have slow performance try using a shared record-group instead. I have used these in the past for exactly the same thing and had no performance problems.
Hope this helps,
Candace Stover
Forms Product Management