Warehouse Builder Causing Poor Database Performance
Since we upgraded to 10g, OWB's runtime process performs so much I/O on the database (which is also 10g) that it becomes extremely slow. So much so that after 2 minutes it is practically impossible to log in. We only have one OWB developer running this at the moment. Does anybody know what can cause this?
Thank you
Stephanie Comeau
You are measuring response time from two totally different applications.
The fact that C++ performs well does not mean that every application written in C++
will have the same performance rating. It all depends on the mechanism of the
logic (program structure). If you want to check and investigate what is really going on,
I suggest using a packet-sniffing tool. There is a program called 'Ethereal' you
could download from SunFreeware. Hope you catch the fish.
Similar Messages
-
Poor performance when dragging item within a list loaded with images - Flex 4
Hi,
I have a custom-built List component that uses a TileLayout. I am using a custom itemRenderer to load images into this list from our server (in this test, 44 images are loaded). I have enabled dragEnabled and dragMove so that I can move items around within the list. The problem comes when I start dragging an item: the dragging operation is very slow and clunky.
When I move the mouse to drag the item, the dropIndicator does not get refreshed for a few seconds and the movement feels like my PC is lagging pretty badly. I've also noticed that during this time my CPU usage spikes to around 25-40%. The funny part is I can scroll the list just fine without any lag at all, but the minute I drag, the application starts to lag really badly. I do have some custom dragOver code that I used to override the dragOverHandler of the list control, but the problem persists even if I take that code out. I've tried with useVirtualLayout set to both true and false, and neither setting made a difference.
Any ideas as to what could be causing the poor performance and/or how I can go about fixing it?
Thanks a lot in advance!
Ahh, good call about the Performance profiler. I'm pretty new to profiling with Flex (I hadn't used Builder Pro before the Flex 4 beta), so please forgive me.
I found some interesting things running the performance profiler, but I'm not sure I understand what to make of it all. I cleared the Performance Profile data right before I loaded the images into the list. I then moved some images around and captured the profiling data (if I understand Adobe correctly, this is the right way to capture performance information for a set of actions).
What I found is a [mouseEvent] item that took 3101ms with 1 "Calls" (!!!!). When I drill down into that item to see the Method Statistics, I actually see three different Callees and no Callers. The sum of the time it took the Callees to execute does not come close to adding up to the 3101ms (it's about 40ms). I'm not sure what to make of those numbers, or whether they are even meaningful. Any insight into these?
The only other items that stand out to me are [pre-render] which has 863ms (Cumulative Time) / 639ms (Self Time), [enterFrameEvent] which has 746ms / 6ms (?!), and [tincan] (what the heck is tincan?) which has times of 521ms for both Cumulative and Self.
Can anyone offer some insight into these numbers and maybe make some more suggestions for me? I apologize for my ignorance on these numbers - as I said, I'm new to the whole Flex profiling thing.
Many thanks in advance!
Edit: I just did another check, this time profiling only from the start of my drag to the end of my drop, and I still see [mouseEvent] taking almost 1000ms of Cumulative Time. However, when I double click that item to see the Method Statistics, no Callers or Callees are listed. What's causing this [mouseEvent] and how come it's taking so long? -
How to perform auto update in staging database using warehouse builder ?
Hi ,
Here is our client's requirement:
Our client wants to transfer data from their production database to a staging database using Warehouse Builder, and whatever updates occur in the production database
must also be reflected in the staging database.
We are currently transferring data from the production DB to staging using ETL (a mapping with an insert/update operator).
The transfer itself works fine, but the staging DB is not automatically updated when new updates happen in the production database.
Can anybody give me details on how to achieve this?
Thanks & regards,
k azamtulla khan.
Hi,
Firstly, there are two other threads from yourself for the same issue (excluding this one), which is a waste of others' time, so kindly refrain from doing so and use one thread.
OWB: how can automatic updation perform in staging database using OWB
OWB: how to use insert/update table operator for target table
Secondly, with regard to your question, here are some options:
1. Use trigger for update.
2. Use materialized view(refresh on commit)
3. Use the Oracle Advanced Queuing (AQ) mechanism to queue the recently inserted/updated records.
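Option 2 might look something like the following sketch. One caveat: REFRESH ON COMMIT only works when the materialized view and its master table live in the same database, so for a remote production source a scheduled fast refresh over a database link is shown instead. The table names (src_orders, stg_orders) and the link name (prod_link) are placeholders, not names from the post.

```sql
-- On the production database: a materialized view log, so that a fast
-- refresh only ships changed rows instead of re-reading the whole table.
CREATE MATERIALIZED VIEW LOG ON src_orders WITH PRIMARY KEY;

-- On the staging database: a fast-refreshing materialized view over a
-- database link, refreshed automatically every 5 minutes by the job queue.
CREATE MATERIALIZED VIEW stg_orders
  REFRESH FAST
  START WITH SYSDATE NEXT SYSDATE + 5/1440
  AS SELECT * FROM src_orders@prod_link;
```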
Kindly close other threads and maintain just one thread.
Regards
Message was edited by: Rado
user647181 -
Poor performance on Discoverer after upgrade to 11g database
Hello,
We have two customers who have experienced a sharp drop in query performance - queries take far longer to run - since they upgraded to an 11g database.
One was a full Disco upgrade from 4 to 10 plus the database upgrade.
The other was purely a database upgrade - Discoverer version was already 10g.
Both were previously on a 9i database.
They are both Oracle Apps - and the reports are based on a mixture of everything from standard tables to custom tables - there is no pattern (or one that we have seen) in the poorly performing reports.
I have not seen much on metalink regarding this - has anyone else come across why this would be?
It does seem to be only Discoverer - standard reports and the app are performing as expected.
Any advice welcome,
Thanks
Rachael
Hi Rachael
There are additional database privileges needed for running against 10g and 11g databases that weren't needed in 9i. Here are the typical privileges that I use:
accept username prompt 'Enter Username: '
accept pword prompt 'Enter Password: '
create user &username identified by &pword;
grant connect, resource to &username;
grant analyze any to &username;
grant create procedure, create sequence to &username;
grant create session, create table, create view to &username;
grant execute any procedure to &username;
grant global query rewrite to &username;
grant create any materialized view to &username;
grant drop any materialized view to &username;
grant alter any materialized view to &username;
grant select any table, unlimited tablespace to &username;
grant execute on sys.dbms_job to &username;
grant select on sys.v_$parameter to &username;
I appreciate that not all of the above are needed for all users, and some are only needed if the user is going to be scheduling.
However, I do know that sys.v_$parameter was not needed in 9i. Can you check whether you have this assigned?
I know this might sound silly too, but you need to refresh your database statistics in order for Discoverer to work well. You might also want to try turning off the query predictor if it is still enabled.
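Refreshing the statistics can be done with the DBMS_STATS package; a minimal sketch (the schema name APPS is an assumption here - substitute whichever schema the Discoverer reports actually query):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname => 'APPS',   -- assumed schema name; replace as appropriate
    cascade => TRUE);    -- gather index statistics as well
END;
/
```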
Best wishes
Michael -
Bad performance: XPath view drives database server to almost 100% CPU
Hi there,
Since I am rather inexperienced with Oracle XML DB, I wonder if I am doing something wrong with the following.
I created a table with a xmltype column in it:
CREATE TABLE LPD_LOAD_XML
(
  FILENAME     VARCHAR2(50 BYTE) NOT NULL,
  XML_CONTENT  SYS.XMLTYPE,
  DATE_CREATED DATE DEFAULT sysdate NOT NULL,
  CREATED_BY   VARCHAR2(30 BYTE) DEFAULT user NOT NULL
);
and I am using an XPath query to retrieve the data stored in the XMLType column:
select extractvalue(value(p),'/medobj/@barcode') barcode
, extractvalue(value(p),'/medobj/@VJ') vj
, extractvalue(value(p),'/medobj/@FNF') fnf
, extractvalue(value(p),'/medobj/@GLB') glb
, cast ( multiset ( select extractvalue(value(q),'/author/@cap') cap
, extractvalue(value(q),'/author/name/addpre') addpre
, extractvalue(value(q),'/author/name/fname') fname
, extractvalue(value(q),'/author/name/prefix') prefix
, extractvalue(value(q),'/author/name/lname') lname
, extractvalue(value(q),'/author/name/addapp') addapp
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/author'))) q
) as authors_type_nst
) as authors_type
, cursor(select extractvalue(value(q),'/corp.author') corp_author
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/corp.author'))) q) corp_authors
, extractvalue(value(p),'/medobj/isbd/auti') auti
, extractvalue(value(p),'/medobj/isbd/edition') edition
, cursor(select extractvalue(value(q),'/impressum/pofpublication') pofpublication
, extractvalue(value(q),'/impressum/publisher') publisher
, extractvalue(value(q),'/impressum/pyr') pyr
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/isbd/impressum'))) q) impressums
, extractvalue(value(p),'/medobj/isbd/collation/extent') extent
, extractvalue(value(p),'/medobj/isbd/collation/illustration') illustration
, extractvalue(value(p),'/medobj/isbd/collation/size') size_
, extractvalue(value(p),'/medobj/isbd/collation/enclosure') enclosure
, cursor(select extractvalue(value(q),'/series/series.title') series_title
, extractvalue(value(q),'/series/series.nr') series_nr
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/isbd/series'))) q) series_cur
, extractvalue(value(p),'/medobj/isbd/part') part
, extractvalue(value(p),'/medobj/isbd/annotation') annotation
, extractvalue(value(p),'/medobj/isbdblock') isdbblock
, extractvalue(value(p),'/medobj/isbn/t') isbn_t
, extractvalue(value(p),'/medobj/isbn/d') isbn_d
, extractvalue(value(p),'/medobj/isbn/uh') isbn_uh
, cursor(select extractvalue(value(q),'/isbn2/t') isbn2_t
, extractvalue(value(q),'/isbn2/d') isbn2_d
, extractvalue(value(q),'/isbn2/uh') isbn2_uh
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/isbn2'))) q) isbn2s
, cursor(select extractvalue(value(q),'/label') label
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/label'))) q) labels
, extractvalue(value(p),'/medobj/ppn') ppn
, cursor(select extractvalue(value(q),'/note') note
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/note'))) q) notes
, extractvalue(value(p),'/medobj/sysreq') sysreq
, extractvalue(value(p),'/medobj/mat') mat
, extractvalue(value(p),'/medobj/lang.content') lang_content
, cursor(select extractvalue(value(q),'/siso/@type') siso_type
, extractvalue(value(q),'/siso/@rank') siso_rank
, extractvalue(value(q),'/siso') siso
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/siso'))) q) sisos
, cursor(select extractvalue(value(q),'/subject/@rank') rank
, extractvalue(value(q),'/subject') subject
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/subjectheading/subject'))) q) subjects
, cursor(select extractvalue(value(q),'/genre/genre.full') genre_full
, extractvalue(value(q),'/genre/genre.abbr') genre_abbr
, extractvalue(value(q),'/genre/genre.cd') genre_cd
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/genre'))) q) genres
, extractvalue(value(p),'/medobj/agecat') agecat
, extractvalue(value(p),'/medobj/readinglevel') readinglevel
, extractvalue(value(p),'/medobj/avi') avi
, extractvalue(value(p),'/medobj/review') review
, extractvalue(value(p),'/medobj/reviewer') reviewer
, cursor(select extractvalue(value(q),'/cover') cover
from TABLE(xmlsequence(EXTRACT(value(p),'medobj/cover'))) q) covers
from lpd_load_xml i
, TABLE(xmlsequence(EXTRACT(i.xml_content,'bmo/medobj'))) p
where i.filename = b_FileName
You can see that I am using both CAST MULTISET and a cursor, while I find out which way works best. I have to use one or the other because the XML is stored hierarchically. But now I am lost. What is most surprising to me is that the database server is 100% busy executing this query. Is there any way to avoid this? How can I retrieve the data in the XML file from within the database without causing such slow performance?
Greets,
William de Ronde
Have you created indexes on the XMLType column? Either way, if I am not mistaken, your XMLType column will default to CLOB storage. That means your XML data is loaded completely into memory for the intermediate result sets. Use the dbms_xplan procedure, autotrace (set autotrace on in SQL*Plus) or tkprof to find out what is happening. Did you gather statistics on your table via the DBMS_STATS package (don't use the analyze statement)?
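A minimal sketch of the diagnostics suggested above, for SQL*Plus. The table name LPD_LOAD_XML comes from the post; the sample query is a cut-down placeholder, not the full query:

```sql
-- Gather optimizer statistics on the XMLType table (not ANALYZE).
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'LPD_LOAD_XML');

-- Explain a (cut-down) version of the statement and display the plan.
EXPLAIN PLAN FOR
  SELECT extractvalue(value(p), '/medobj/ppn') ppn
    FROM lpd_load_xml i,
         TABLE(xmlsequence(EXTRACT(i.xml_content, 'bmo/medobj'))) p;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```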
-
Registering Database Users as Warehouse Builder Users - Paris
On OWB 10.2.0.1, when I do this (Registering Database Users as Warehouse Builder Users - Target) I get ORA-01031: insufficient privileges.
DEVDW error:ORA-01031: insufficient privileges
ORA-06512: at "REP_OWNER.WB_RTI_TARGET_SCHEMA_PRIVS", line 356
ORA-06512: at line 1
I too get the same error, even though I granted those privileges externally through OEM; the same error keeps repeating. We are not able to log into the registry as the repository owner. Can someone tell us what database privileges a repository owner and a repository user should have?
-
Poor performance and high number of gets on seemingly simple insert/select
Versions & config:
Database : 10.2.0.4.0
Application : Oracle E-Business Suite 11.5.10.2
2 node RAC, IBM AIX 5.3
Here's the insert/select which I'm struggling to explain: why is it taking 6 seconds, and why does it need to get > 24,000 blocks?
INSERT INTO WF_ITEM_ATTRIBUTE_VALUES ( ITEM_TYPE, ITEM_KEY, NAME, TEXT_VALUE,
NUMBER_VALUE, DATE_VALUE ) SELECT :B1 , :B2 , WIA.NAME, WIA.TEXT_DEFAULT,
WIA.NUMBER_DEFAULT, WIA.DATE_DEFAULT FROM WF_ITEM_ATTRIBUTES WIA WHERE
WIA.ITEM_TYPE = :B1
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 4 0
Execute 2 3.44 6.36 2 24297 198 36
Fetch 0 0.00 0.00 0 0 0 0
total 3 3.44 6.36 2 24297 202 36
Misses in library cache during parse: 1
Misses in library cache during execute: 2
Also from the tkprof output, the explain plan and waits - virtually zero waits:
Rows Execution Plan
0 INSERT STATEMENT MODE: ALL_ROWS
0 TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'WF_ITEM_ATTRIBUTES' (TABLE)
0 INDEX MODE: ANALYZED (RANGE SCAN) OF 'WF_ITEM_ATTRIBUTES_PK' (INDEX (UNIQUE))
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
library cache lock 12 0.00 0.00
gc current block 2-way 14 0.00 0.00
db file sequential read 2 0.01 0.01
row cache lock 24 0.00 0.01
library cache pin 2 0.00 0.00
rdbms ipc reply 1 0.00 0.00
gc cr block 2-way 4 0.00 0.00
gc current grant busy 1 0.00 0.00
********************************************************************************
The statement was executed 2 times. I know from slicing up the trc file that:
exe #1 : elapsed = 0.02s, query = 25, current = 47, rows = 11
exe #2 : elapsed = 6.34s, query = 24272, current = 151, rows = 25
If I run just the select portion of the statement, using bind values from exe #2, I get a small number of gets (< 10) and < 0.1 secs elapsed.
If I make the insert into an empty, non-partitioned table, I get :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.01 0.08 0 137 53 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.01 0.08 0 137 53 25
and the same explain plan - using an index range scan on WF_Item_Attributes_PK.
This problem is part of testing of a database upgrade and country go-live. On a 10.2.0.3 test system (non-RAC), the same insert/select - using the real WF_Item_Attributes_Value table takes :
call count cpu elapsed disk query current rows
Parse 1 0.00 0.00 0 0 0 0
Execute 1 0.00 0.10 10 27 136 25
Fetch 0 0.00 0.00 0 0 0 0
total 2 0.00 0.10 10 27 136 25
So I'm struggling to understand why the performance on the 10.2.0.4 RAC system is so much worse for this query, and why it's doing so many gets. Suggestions, thoughts, and ideas welcome.
I've verified system level things - CPUs weren't/aren't max'd out, no significant paging/swapping activity, run queue not long. AWR report for the time period shows nothing unusual.
further info on the objects concerned:
query source table :
WF_Item_Attributes_PK : unique index on Item_Type, Name. Index has 144 blocks, non-partitioned
WF_Item_Attributes tbl : non-partitioned, 160 blocks
insert destination table:
WF_Item_Attribute_Values:
range partitioned on Item_Type, and hash sub-partitioned on Item_Key
both executions of the insert hit the partition with the most data : 127,691 blocks total ; 8 sub-partitions with 15,896 to 16,055 blocks per sub-partition.
WF_Item_Attribute_Values_PK : unique index on columns Item_Type, Item_Key, Name. Range/hash partitioned as per table.
Bind values:
exe #1 : Item_Type (:B1) = OEOH, Item_Key (:B2) = 1048671
exe #2 : Item_Type (:B1) = OEOL, Item_Key (:B2) = 4253168
number of rows in WF_Item_Attribute_Values for Item_Type = OEOH : 1132587
number of rows in WF_Item_Attribute_Values for Item_Type = OEOL : 18763670
The non-RAC 10.2.0.3 test system (clone of Production from last night) has higher row counts for these 2.
thanks and regards
Ivan
hi Sven,
Thanks for your input.
1) I guess so, but I haven't lifted the lid to delve inside the form as to which one. I don't think it's the cause though, as I got poor performance running the insert statement with my own value (same statement, using my own bind value).
2) In every execution plan I've seen, checked, re-checked, it uses a range scan on the primary key. It is the most efficient I think, but the source table is small in any case - table 160 blocks, PK index 144 blocks. So I think it's the partitioned destination table that's the problem - but we only see this issue on the 10.2.0.4 pre-production (RAC) system. The 10.2.0.3 (RAC) Production system doesn't have it. This is why it's so puzzling to me - the source table read is fast, and does few gets.
3) table storage details below - the Item_Types being used were 'OEOH' (fast execution) and 'OEOL' (slow execution). Both hit partition WF_ITEM49, hence I've only expanded the subpartition info for that one (there are over 600 sub-partitions).
============= From DBA_Part_Tables : Partition Type / Count =============
PARTITI SUBPART PARTITION_COUNT DEF_TABLESPACE_NAME
RANGE HASH 77 APPS_TS_TX_DATA
1 row selected.
============= From DBA_Tab_Partitions : Partition Names / Tablespaces =============
Partition Name TS Name High Value High Val Len
WF_ITEM1 APPS_TS_TX_DATA 'A1' 4
WF_ITEM2 APPS_TS_TX_DATA 'AM' 4
WF_ITEM3 APPS_TS_TX_DATA 'AP' 4
WF_ITEM47 APPS_TS_TX_DATA 'OB' 4
WF_ITEM48 APPS_TS_TX_DATA 'OE' 4
WF_ITEM49 APPS_TS_TX_DATA 'OF' 4
WF_ITEM50 APPS_TS_TX_DATA 'OK' 4
WF_ITEM75 APPS_TS_TX_DATA 'WI' 4
WF_ITEM76 APPS_TS_TX_DATA 'WS' 4
WF_ITEM77 APPS_TS_TX_DATA MAXVALUE 8
77 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_TYPE 1
1 row selected.
PPR1 sql> @q_tabsubpart wf_item_attribute_values WF_ITEM49
============= From DBA_Tab_SubPartitions : SubPartition Names / Tablespaces =============
Partition Name SUBPARTITION_NAME TS Name High Value High Val Len
WF_ITEM49 SYS_SUBP3326 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3328 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3332 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3331 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3330 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3329 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3327 APPS_TS_TX_DATA 0
WF_ITEM49 SYS_SUBP3325 APPS_TS_TX_DATA 0
8 rows selected.
============= From dba_part_key_columns : Partition Columns =============
NAME OBJEC Column Name COLUMN_POSITION
WF_ITEM_ATTRIBUTE_VALUES TABLE ITEM_KEY 1
1 row selected.
from DBA_Segments - just for partition WF_ITEM49 :
Segment Name TSname Partition Name Segment Type BLOCKS Mbytes EXTENTS Next Ext(Mb)
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3332 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3331 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3330 TblSubPart 16160 126.25 1010 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3329 TblSubPart 16112 125.875 1007 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3328 TblSubPart 16096 125.75 1006 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3327 TblSubPart 16224 126.75 1014 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3326 TblSubPart 16208 126.625 1013 .125
WF_ITEM_ATTRIBUTE_VALUES @TX_DATA SYS_SUBP3325 TblSubPart 16128 126 1008 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3332 IdxSubPart 59424 464.25 3714 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3331 IdxSubPart 59296 463.25 3706 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3330 IdxSubPart 59520 465 3720 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3329 IdxSubPart 59104 461.75 3694 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3328 IdxSubPart 59456 464.5 3716 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3327 IdxSubPart 60016 468.875 3751 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3326 IdxSubPart 59616 465.75 3726 .125
WF_ITEM_ATTRIBUTE_VALUES_PK @TX_IDX SYS_SUBP3325 IdxSubPart 59376 463.875 3711 .125
sum 4726.5
[the @ in the TS Name is my shortcode, as Apps stupidly prefixes every ts with "APPS_TS_"]
The Tablespaces used for all subpartitions are UNIFORM extent mgmt, AUTO segment_space_management; LOCAL extent mgmt.
regards
Ivan -
Story
I've been using arch for the past four months, dual booting with my old Windows XP. As I'm very fond of Flash games and make my own programs with a cross-platform language, I've found few problems with the migration. One of them was the Adobe Flash Player performance, which was stunningly bad. But everyone was saying that was normal, so I left it as is.
However, one particular error always worried me: a seemingly randomly triggered siren sound coming from the motherboard speaker. Thinking it was an alarm about some fatal kernel error, I had mostly been solving it with reboots.
But then it happened. While playing a graphics-intensive game on Windows shortly after rebooting from Arch, the same siren sound started. It felt like a slap across the face: it was not a kernel error, it was a motherboard overheat alarm.
The Problem
Since the computer was showing signs of overheating, I started looking at things from another angle. I noticed that some tasks take unusually long in Arch (e.g. building things from source; Firefox/OpenOffice startup; any graphics-intensive program), especially the Flash Player.
A great example is the game Penguinz, which runs flawlessly in Windows but is unbearably slow in Arch - so slow that it alone caused the overheat twice. And when I tried to record another flash game using XVidCap, things went so badly that the game halved its FPS and started ignoring key presses.
Tech Info
Dual Core 3.2 processor
1 gb RAM
256 mb Geforce FX 5500 video card
Running Openbox
Using proprietary NVIDIA driver
TL;DR: poor performance on some tasks. Flash Player is so slow that it overheats the CPU and makes me cry. It's fine on Windows.
Off the top of my head I can think of some possible reasons: a bad video driver, an unwanted background application messing things up, known Flash Player performance problems, or an ActionScript bug specific to Linux/Arch.
Where do you think the problem is?
jwcxz wrote: Have you looked at your process table for any program with abnormal CPU usage? That seems like the logical place to start. You shouldn't be getting poor performance in anything with that system. I have a 2.0GHz Core 2 Duo and an Intel GMA 965 and I've never had any problems with Flash. It's much better than it used to be.
Pidgin scared me for a while because it froze for no apparent reason. After fixing this, the table contains these two guys:
%CPU
Firefox: 80%~100%
X: 0~20%
Graphics-intensive test, so I think the X usage is normal. It might be some oddity in the Firefox+Linux+Flash combination, maybe a conflict. I'll try another browser.
EDIT:
Did a Javascript benchmark to test both systems and browsers.
Windows XP + Firefox = 4361.4ms
Arch + Firefox = 5146.0ms
So it's actually a lot slower without even taking Flash into account. If someone knows a platform-independent benchmark to test both systems completely, and not only the browser, feel free to point it out.
I think that something is already wrong here and the lack of power saving systems only aggravated the problem, causing overheat.
EDIT2:
Browser performance fixed: migrated to Midori. Flash is still slower than on Windows, but now it's bearable. Pretty neat browser too; it goes better with the Arch Way. It shouldn't fix the temperature, however.
Applied B's idea, but haven't tested yet. I'm not in the mood to play flash games for two straight hours today.
Last edited by BoppreH (2009-05-03 04:25:20) -
Poor Performance in ETL SCD Load
Hi gurus,
We are facing some serious performance problems during an UPDATE step, which is part of a SCD type 2 process for Assets (SIL_Vert/SIL_AssetDimension_SCDUpdate). The source system is Siebel CRM. The tools for ETL processing are listed below:
Informatica PowerCenter 9.1.0 HotFix2 0902 357 (R181 D90)
Oracle BI Data Warehouse Administration Console (Dac Build AN 10.1.3.4.1.patch.20120711.0516)
The OOTB mapping for this step consists of a simple SELECT command - which retrieves the historical records of the Dimension to be updated - and the target table (W_ASSET_D), with no Update Strategy. The session is configured to always perform UPDATEs. We have also set $$UPDATE_ALL_HISTORY to "N" in DAC: this way we only select the most recent records from the Dimension history, and the only columns effectively updated are the SCD system columns (EFFECTIVE_FROM_DT, EFFECTIVE_TO_DT, CURRENT_FLG, ...).
The problem is that the UPDATE command is executed individually by Informatica PowerCenter for each record in W_ASSET_D. For 2,486,000 UPDATEs we saw ~2h of processing - very poor performance for a single ETL step. Our W_ASSET_D has ~150M records today.
Some questions on the above:
- Is this an expected average execution time for this number of records?
- Updating record by record is not optimal; this could easily be overcome with a BULK COLLECT/FORALL approach. Is there a way to optimize the method used by Informatica, or do we need to write our own PL/SQL script and run it in DAC?
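For reference, the BULK COLLECT/FORALL approach mentioned above could be sketched roughly as below. The table name (W_ASSET_D) and SCD columns come from the post; the predicate and batch size are placeholders that would have to match the real mapping logic:

```sql
DECLARE
  CURSOR c_hist IS
    SELECT rowid FROM w_asset_d
     WHERE current_flg = 'Y';          -- placeholder predicate
  TYPE t_rowids IS TABLE OF ROWID;
  l_rowids t_rowids;
BEGIN
  OPEN c_hist;
  LOOP
    -- Fetch in batches to keep memory bounded on a ~150M-row table.
    FETCH c_hist BULK COLLECT INTO l_rowids LIMIT 10000;
    EXIT WHEN l_rowids.COUNT = 0;
    -- One round trip updates the whole batch instead of row by row.
    FORALL i IN 1 .. l_rowids.COUNT
      UPDATE w_asset_d
         SET effective_to_dt = SYSDATE,
             current_flg     = 'N'
       WHERE rowid = l_rowids(i);
    COMMIT;
  END LOOP;
  CLOSE c_hist;
END;
/
```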
Thanks in advance,
Guilherme
Hi,
Thank you for posting in Windows Server Forum.
Initially please check the configuration & requirement part for RemoteFX. You can follow below article for further research.
RemoteFX vGPU Setup and Configuration Guide for Windows Server 2012
http://social.technet.microsoft.com/wiki/contents/articles/16652.remotefx-vgpu-setup-and-configuration-guide-for-windows-server-2012.aspx
Hope it helps!
Thanks.
Dharmesh Solanki
TechNet Community Support -
URGENT: Migrating from SQL to Oracle results in very poor performance!
*** IMPORTANT, NEED YOUR HELP ***
Dear all, I have migrated a banking business solution from Windows/SQL Server 2000 to Sun Solaris/Oracle 10g. In the test environment everything was working fine. On the production system we have very poor DB performance - about 100 times slower than SQL Server 2000!
Environment at Customer Server Side:
Hardware: SUN Fire 4 CPU's, OS: Solaris 5.8, DB Oracle 8 and 10
Data Storage: Em2
DB access thru OCCI [Environment:OBJECT, Connection Pool, Create Connection]
Because of dependencies from older applications, it is necessary to run Oracle 8 on the same server as well. Since we started running the new solution, which uses Oracle 10, the listener for Oracle 8 is frequently gone (or killed by someone?). The performance of the whole Oracle 10 environment is very poor. My analysis showed that the process of creating a connection in the connection pool takes up to 14 seconds. Now I am wondering: is it a problem to run different Oracle versions on the same server? The customer installed/created the new Oracle 10 DB under the same user account (oracle) as the older version. To run the new solution we have to change the Oracle environment settings manually. All hints/suggestions to solve this problem are welcome. Thanks in advance.
Anton
"On the production system we have very poor DB performance" - have you verified that the cause of the poor performance is not the queries and the plans being generated by the database?
Do you know if some of the queries appear to take more time than they used to on the old system? Did you analyze such queries to see what might be the problem?
Are you running RBO or CBO?
if stats are generated, how are they generated and how often?
Did you see what autotrace and tkprof has to tell you about problem queries (if in fact such queries have been identified)?
http://download-west.oracle.com/docs/cd/B14117_01/server.101/b10752/sqltrace.htm#1052 -
Poor performance with Oracle Spatial when spatial query invoked remotely
Is anyone aware of any problems with Oracle Spatial (10.2.0.4 with patches 6989483 and 7003151 on Red Hat Linux 4) which might explain why a spatial query (SDO_WITHIN_DISTANCE) would perform 20 times worse when it was invoked remotely from another computer (using SQLplus) vs. invoking the very same query from the database server itself (also using SQLplus)?
Does Oracle Spatial have any known problems with servers which use SAN disk storage? That is the primary difference between a server in which I see this poor performance and another server where the performance is fine.
Thank you in advance for any thoughts you might share.
OK, that's clearer.
Are you sure it is the SQL inside the procedure that is causing the problem? To check, try extracting the SQL from the procedure and running it in SQL*Plus with:
set autotrace on
set timing on
SELECT ...
If the plans and performance are the same, then it may be something inside the procedure itself.
Have you profiled the procedure? Here is an example of how to do it:
Prompt Firstly, create PL/SQL profiler table
@$ORACLE_HOME/rdbms/admin/proftab.sql
Prompt Secondly, use the profiler to gather stats on execution characteristics
DECLARE
l_run_num PLS_INTEGER := 1;
l_max_num PLS_INTEGER := 1;
v_geom mdsys.sdo_geometry := mdsys.sdo_geometry(2002,null,null,sdo_elem_info_array(1,2,1),sdo_ordinate_array(0,0,45,45,90,0,135,45,180,0,180,-45,45,-45,0,0));
BEGIN
dbms_output.put_line('Start Profiler Result = ' || DBMS_PROFILER.START_PROFILER(run_comment => 'PARALLEL PROFILE')); -- The comment name can be anything: here it is related to the Parallel procedure I am testing.
v_geom := Parallel(v_geom,10,0.05,1); -- Put your procedure call here
dbms_output.put_line('Stop Profiler Result = ' || DBMS_PROFILER.STOP_PROFILER );
END;
/
SHOW ERRORS
Prompt Finally, report activity
COLUMN runid FORMAT 99999
COLUMN run_comment FORMAT A40
SELECT runid || ',' || run_date || ',' || run_comment || ',' || run_total_time
FROM plsql_profiler_runs
ORDER BY runid;
COLUMN runid FORMAT 99999
COLUMN unit_number FORMAT 99999
COLUMN unit_type FORMAT A20
COLUMN unit_owner FORMAT A20
COLUMN text FORMAT A100
compute sum label 'Total_Time' of total_time on runid
break on runid skip 1
set linesize 200
SELECT u.runid || ',' ||
u.unit_name,
d.line#,
d.total_occur,
d.total_time,
text
FROM plsql_profiler_units u
JOIN plsql_profiler_data d ON u.runid = d.runid
AND
u.unit_number = d.unit_number
JOIN all_source als ON ( als.owner = 'CODESYS'
AND als.type = u.unit_type
AND als.name = u.unit_name
AND als.line = d.line# )
WHERE u.runid = (SELECT max(runid) FROM plsql_profiler_runs)
ORDER BY d.total_time desc;
Run the profiler in both environments and see if you can spot where the slowdown exists.
regards
Simon -
Problem in DB Link creation ( Oracle warehouse builder 3i )
I am facing a problem in DB Link creation.
Backend: Oracle 8i Server on my machine
DW Software: Oracle warehouse builder 3i ( client , repository asistant.....)
Operating system: Windows NT 4 SERVICE PACK 6
I want to use the scott schema (the default schema provided by Oracle) as my input source.
How can I create the DB LINK ( for scott database) ?
How can I create the DB LINK ( for any other database) ?
Do I need to add anything in the settings of "ODBC Data Source Administration"?
==================
Settings done:
==================
DB Link Name :scott
Host name: my machine's IP address
port number: 1521
oracle sid: prashant ( my oracle sid)
user name:scott
password:tiger
==================
Gives error:
==================
Testing...
Failed.
ORA-02085 Database link %s connects to %s
*Cause: a database link connected to a database with a different name.
The connection is rejected.
*Action: create a database link with the same name as the database it
connects to, or set global_names=false.
Please change GLOBAL_NAMES to false in one of two ways:
First option:
Log in to the database with DBA privilege and run:
alter system set GLOBAL_NAMES = false;
second option:
Change the GLOBAL_NAMES to false in database system parameter file, init.ora
==================
Options tried:
==================
1. I tried setting GLOBAL_NAMES = false, but I am still not able to create the DB LINK.
2. As per suggestion of one the friend
"A file named "Logon.Properties" under the directory $OWB_HOME/wbapp
in this file please set the property
OWBSingleUserLockUsage = false"
I tried the same but it is still not working.
How should I proceed further?
I am expecting URGENT FEEDBACK.
Reply me on : [email protected]
From
Prashant

I solved the problem.
Procedure I followed :
UNINSTALLED THE ORACLE WAREHOUSE BUILDER SOFTWARE.
SET 'GLOBAL_NAMES = FALSE' in the init.ora file.
RESTARTED MY MACHINE.
REINSTALLED THE ORACLE WAREHOUSE BUILDER SOFTWARE. -
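A minimal sketch of the fix described in this thread, run as a DBA; the link name, TNS alias, and credentials below come from the post and should be adjusted to your environment:

```sql
-- With GLOBAL_NAMES=false, the link name no longer has to match the
-- remote database's global name (avoids ORA-02085).
ALTER SYSTEM SET global_names = FALSE;

-- Link to the scott schema; 'prashant' is the SID/TNS alias from the post.
CREATE DATABASE LINK scott
  CONNECT TO scott IDENTIFIED BY tiger
  USING 'prashant';

-- Smoke test: query a table over the new link.
SELECT COUNT(*) FROM emp@scott;
```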
Dear buddies,
I need to migrate my tables and data from SQL Server 2005 to Oracle 10g.
My boss's requirement is I must be able to redo it at any point of time. Meaning, I should be able to use the script at any point to recreate the tables and reload them independently without using SQL Server or the existing configurations.
Does Oracle Warehouse Builder provide such scripts after the migration, or during the migration, which can be reused independently?
There are about 7 SQL Server databases and each has around 270 tables. Some would be totally new and some would involve merging.
Please advise me.
Thanks.
Nith

Hi everyone,
I just started like this:
1 - Start - All programs - Oracle Home - Warehouse Builder - Administration - Repository Assistant - Basic Install -
2 - User Name=rep_user
password=rep_user
sysdba user name = sys
sysdba password = the password i gave during the database creation (using dbca)
hostname=localhost
port number = 1521
oracle service name = test6 (the name of the database & instance I just created)
When I click Next, this error pops up:
INS009: Unable to connect to the database with user SYS. java.sql.SQLException: IO Exception: The Network adapter could not establish the connection.
The remaining disk space on that drive is a few gigs, and I created the database with the 16384 setting.
Please advise on how I should go about this.
I have been struggling to perform my migration. I really hope someone would be able to guide me through.
OS: Windows 2003 Enter Edition
Oracle 10g
And I am not able to access anything from SQL Plus after I installed Oracle Warehouse Builder.
So, is the error I am facing something to do with a bad installation? Do I need to uninstall my OWB, or uninstall both Oracle and OWB and reinstall them?
Please guide me.
Nith
Thanks a lot.
Nith -
Deploy in Warehouse Builder hangs after validation succeeds (ORA-01017)
Hello,
I have a problem deploying the mappings in Warehouse Builder.
I know what's causing it, but I cannot find out how to fix it.
I have a database registered as "target" (not the same DB where the OWB repository is), where I want to deploy the mappings; that database was registered with a user (SISED).
Everything worked fine until the password of the user SISED was changed. Now when I try to deploy, OWB asks me for authentication of the runtime repository (owbrt_user), then it successfully validates the mapping, but when it tries to deploy (generate the package) it fails with ORA-01017 - invalid username/password.
This is because somewhere in the OWB repository the password of user SISED remains the old one. I tried setting the SISED password back to the old one, and then it works.
So my question is: how can I change the password registered in OWB for the target database user SISED? I tried to do it with the OWB client, but I cannot find this option anywhere.
My mail is: [email protected] (if need some more detail for help me).
Thanks in advance,
regards,
Pedro Ribeiro

Hi,
Have you tried to re-register the location in the Control Center?
In the Control Center, locate the location, right-click it and choose Register. I think this might fix it, unless I've totally misunderstood you.
regards Ragnar -
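As a stopgap while the stored location password is out of sync, the workaround the poster stumbled on can be done explicitly; a minimal sketch, run as a DBA (the password value here is hypothetical):

```sql
-- Set SISED's database password back to the value OWB still has on
-- record, so deployments succeed until the location is re-registered.
ALTER USER sised IDENTIFIED BY old_password;
```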
I am new to Warehouse Builder and relatively new to Oracle, but not to database/software. I am involved in a study trying to determine if OWB 10g might be useful in my current project. However, I am not having any luck in installing OWB.
I believe I am following the installation directions correctly, but I keep running into problems. The documentation says that only the 10g database needs to be installed, but I keep finding that other components need to be installed for the OWB install to continue (e.g. HTTP server, Portal).
Can anyone provide (or point me to) clear step-by-step instructions for installing OWB 10g on a clean system (or on a system with ONLY Oracle Database 10g installed)?
Thank you

Good morning Chuck,
I've performed a clean install of OWB 10g on top of a basic 10g install on my laptop (no HTTP server etc.) and had no problems there.
Have you checked the Installation and Configuration Guide for basic requirements? Especially Chapter 1 and Appendix A.
Available at http://www.oracle.com/technology/documentation/warehouse.html
Cheers, Patrick