Poor performance when xmlquery contains attribute matching XPATH
We send a SELECT containing XMLQuery to an Oracle 11g database. If the XQuery contains an XPath with an attribute-matching predicate, performance is significantly worse than when a positional predicate is used. Here are the two queries:
1)
select xmlquery('declare default element namespace "namespace"; for $e at $pos in $c/services/service[@uri="test"] let $result := if ($pos >= 2) then "|||||ERROR|||||" else $e return $result' passing document as "c" RETURNING CONTENT).getStringVal(), "LASTUPDATE" from services where id = :1
2)
select xmlquery('declare default element namespace "namespace"; for $e at $pos in $c/services/service[1] let $result := if ($pos >= 2) then "|||||ERROR|||||" else $e return $result' passing document as "c" RETURNING CONTENT).getStringVal(), "LASTUPDATE" from services where id = :1
The only difference is the two XPATHS:
1) $c/services/service[@uri="test"]
2) $c/services/service[1]
The second query performs 7X to 10X better than the first. More specifically, the Oracle DB CPU will reach 80-100% with only 40 tps of the first query but the second could hit 500 tps.
We would expect the first to perform slightly worse than the second, because Oracle would need to compare the uri attribute values in the document against the predicate. But the documents being searched contain only one service/@uri.
Does anyone know why the performance is so bad with the first query? Is there anything we can do to tune the DB to help in this area? We need to be able to run the first query, but the current CPU utilization is far too high.
Sorry, number 1 should be $c/services/service[@uri="test"]. That should be [ instead of (, but the forum wasn't displaying it correctly above. It should be that way in the query as well.
Edited by: user6668496 on Mar 10, 2009 1:50 PM
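The semantic difference between the two paths can be sketched outside Oracle. Below is a minimal Python illustration using the standard library's ElementTree (namespaces omitted, and the two-service document is an invented example so the position check can actually fire); it only models the XPath semantics, not Oracle's evaluation:

```python
import xml.etree.ElementTree as ET

# Invented two-service document. The real documents are said to contain
# only one service/@uri; two are used here so the ERROR branch is reachable.
doc = ET.fromstring(
    '<services>'
    '<service uri="test">a</service>'
    '<service uri="test">b</service>'
    '</services>'
)

def query1(root):
    # Path 1: every <service> matching the attribute predicate is kept,
    # so $pos can exceed 1 and the ERROR branch is reachable.
    out = []
    for pos, e in enumerate(root.findall("service[@uri='test']"), start=1):
        out.append("|||||ERROR|||||" if pos >= 2 else e.text)
    return out

def query2(root):
    # Path 2: the positional predicate [1] truncates the sequence to a
    # single node, so the position check can never fire.
    return [e.text for e in root.findall("service[1]")]

print(query1(doc))  # ['a', '|||||ERROR|||||']
print(query2(doc))  # ['a']
```

Because the positional predicate bounds the node sequence to one item, the optimizer can stop after the first match, which is consistent with the throughput difference reported above.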
Similar Messages
-
DOI - I_OI_SPREADSHEET, poor performance when reading more than 9999 record
Hi,
Please post this message in the ABAP Performance and Tuning section and see if you get any advice there.
Best Regards,
Marjo
Hi,
I met this issue when I tried to write values to a large number of cells in an Excel range.
And I solve this issue by using CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_EXPORT.
So, I think you may fix it in the same way.
1. Select range in by the I_OI_SPREADSHEET
2. Call method I_OI_DOCUMENT_PROXY->COPY_SELECTION
3. Call method CL_GUI_FRONTEND_SERVICES=>CLIPBOARD_IMPORT
Cheers, -
Poor performance when dragging item within a list loaded with images - Flex 4
Hi,
I have a custom-built List component that is using a TileLayout. I am using a custom itemRenderer to load images into this list from our server (in this test, 44 images are loaded). I have enabled dragEnabled and dragMove so that I can move items around within the list. The problem comes when I start dragging an item: the dragging operation is very slow and clunky.
When I move the mouse to drag the item, the dropIndicator does not get refreshed for a few seconds and the movement feels like my PC is lagging pretty badly. I've also noticed that during this time my CPU usage is spiking up to around 25-40%. The funny part is I can scroll the list just fine without any lag at all, but the minute I drag the application starts to lag really bad. I do have some custom dragOver code that I used to override the dragOverHandler of the list control, but the problem persists even if I take that code out. I've tried with useVirtualLayout set to both true and false and neither setting made a difference.
Any ideas as to what could be causing the poor performance and/or how I can go about fixing it?
Thanks a lot in advance!
Ahh, good call about the Performance profiler. I'm pretty new to profiling with Flex (I hadn't used Builder Pro before the Flex 4 beta), so please forgive me.
I found some interesting things running the performance profiler, but I'm not sure I understand what to make of it all. I cleared the Performance Profile data right before I loaded the images into the list. I then moved some images around and then captured the Profiling Data (if I understand Adobe correctly, this is the correct way to capture performance information for a set of actions).
What I found is there is a [mouseEvent] item that took 3101ms with 1 "Calls" (!!!!). When I drill down into that item to see the Method Statistics, I actually see three different Callees and no callers. The sum of the time it took the Callees to execute does not even come close to adding up to the 3101 ms (about 40ms). I'm not sure what I can make of those numbers, or if they are even meaningful. Any insight into these?
The only other items that stand out to me are [pre-render] which has 863ms (Cumulative Time) / 639ms (Self Time), [enterFrameEvent] which has 746ms / 6ms (?!), and [tincan] (what the heck is tincan?) which has times of 521ms for both Cumulative and Self.
Can anyone offer some insight into these numbers and maybe make some more suggestions for me? I apologize for my ignorance on these numbers - as I said, I'm new to the whole Flex profiling thing.
Many thanks in advance!
Edit: I just did another check, this time profiling only from the start of my drag to the end of my drop, and I still see [mouseEvent] taking almost 1000ms of Cumulative Time. However, when I double click that item to see the Method Statistics, no Callers or Callees are listed. What's causing this [mouseEvent] and how come it's taking so long? -
Poor performance when accessing a MSAS 2008 cube via BI Server
I am having an issue with performance when accessing a MSAS 2008 cube via the BI Server.
I think it has to do with the way that the MDX is being generated.
Please could someone advise on any settings I could try or any known issues with the integration?
Thanks,
Kim Sarkin
[email protected]
Hi Kim,
Sorry I can't help out, but I'm curious how you got the AS 2008 drivers installed for use with OBIEE? I have an AS 2008 environment, but my OBIEE version only supports 2000 and 2005.
Thanks,
Mark -
Poor performance when using kde desktop effect
Hey,
I'm having trouble when using KDE effects (System Settings -> Desktop -> Desktop Effects).
I have a dual-core E5200 3 GHz, 2 GB of PC8500 memory, and an HD4850 using the fglrx driver, but I get incredibly bad performance when using desktop effects and watching video. I can barely watch an 800x600 video in full-screen mode without bad performance, with X getting up to 40% CPU usage.
It's really like my graphics card isn't handling the rendering, but 3D acceleration is working; I can play 3D games without problems so far (as long as the desktop effects aren't enabled, since the CPU has a hard time handling both for recent games).
So I guess it's some trouble with 2D acceleration or something like that. I read that some people have had this issue, but I didn't figure out a way to fix it.
Here is my xorg.conf, in case something is wrong with it :
Section "ServerLayout"
Identifier "X.org Configured"
Screen 0 "aticonfig-Screen[0]-0" 0 0
InputDevice "Mouse0" "CorePointer"
InputDevice "Keyboard0" "CoreKeyboard"
EndSection
Section "Files"
ModulePath "/usr/lib/xorg/modules"
FontPath "/usr/share/fonts/misc"
FontPath "/usr/share/fonts/100dpi:unscaled"
FontPath "/usr/share/fonts/75dpi:unscaled"
FontPath "/usr/share/fonts/TTF"
FontPath "/usr/share/fonts/Type1"
EndSection
Section "Module"
Load "dri2"
Load "extmod"
Load "dbe"
Load "record"
Load "glx"
Load "dri"
EndSection
Section "InputDevice"
Identifier "Keyboard0"
Driver "kbd"
EndSection
Section "InputDevice"
Identifier "Mouse0"
Driver "mouse"
Option "Protocol" "auto"
Option "Device" "/dev/input/mice"
Option "ZAxisMapping" "4 5 6 7"
EndSection
Section "Monitor"
Identifier "Monitor0"
VendorName "Monitor Vendor"
ModelName "Monitor Model"
EndSection
Section "Monitor"
Identifier "aticonfig-Monitor[0]-0"
Option "VendorName" "ATI Proprietary Driver"
Option "ModelName" "Generic Autodetecting Monitor"
Option "DPMS" "true"
EndSection
Section "Device"
### Available Driver options are:-
### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
### [arg]: arg optional
#Option "ShadowFB" # [<bool>]
#Option "DefaultRefresh" # [<bool>]
#Option "ModeSetClearScreen" # [<bool>]
Identifier "Card0"
Driver "vesa"
VendorName "ATI Technologies Inc"
BoardName "RV770 [Radeon HD 4850]"
BusID "PCI:8:0:0"
EndSection
Section "Device"
Identifier "aticonfig-Device[0]-0"
Driver "fglrx"
BusID "PCI:8:0:0"
EndSection
Section "Screen"
Identifier "Screen0"
Device "Card0"
Monitor "Monitor0"
SubSection "Display"
Viewport 0 0
Depth 1
EndSubSection
SubSection "Display"
Viewport 0 0
Depth 4
EndSubSection
SubSection "Display"
Viewport 0 0
Depth 8
EndSubSection
SubSection "Display"
Viewport 0 0
Depth 15
EndSubSection
SubSection "Display"
Viewport 0 0
Depth 16
EndSubSection
SubSection "Display"
Viewport 0 0
Depth 24
EndSubSection
EndSection
Section "Screen"
Identifier "aticonfig-Screen[0]-0"
Device "aticonfig-Device[0]-0"
Monitor "aticonfig-Monitor[0]-0"
DefaultDepth 24
SubSection "Display"
Viewport 0 0
Depth 24
EndSubSection
EndSection
Thank you for any help.
Section "Device"
### Available Driver options are:-
### Values: <i>: integer, <f>: float, <bool>: "True"/"False",
### <string>: "String", <freq>: "<f> Hz/kHz/MHz"
### [arg]: arg optional
#Option "ShadowFB" # [<bool>]
#Option "DefaultRefresh" # [<bool>]
#Option "ModeSetClearScreen" # [<bool>]
Identifier "Card0"
Driver "vesa"
VendorName "ATI Technologies Inc"
BoardName "RV770 [Radeon HD 4850]"
BusID "PCI:8:0:0"
EndSection
and
Section "Monitor"
Identifier "Monitor0"
VendorName "Monitor Vendor"
ModelName "Monitor Model"
EndSection
I see no reason for those to be there.
Make a backup of your xorg.conf and remove or comment out those sections. -
Poor query performance when joining CONTAINS to another table
We just recently began evaluating Oracle Text for a search solution. We need to be able to search a table that can have over 20+ million rows. Each user may only have visibility to a tiny fraction of those rows. The goal is to have a single Oracle Text index that represents all of the searchable columns in the table (multi column datastore) and provide a score for each search result so that we can sort the search results in descending order by score. What we're seeing is that query performance from TOAD is extremely fast when we write a simple CONTAINS query against the Oracle Text indexed table. However, when we attempt to first reduce the rows the CONTAINS query needs to search by using a WITH we find that the query performance degrades significantly.
For example, we can find all the records a user has access to from our base table by the following query:
SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID;
This query can execute in <100 ms. In the working example, this query returns around 1200 rows of the primary key duns_loc.
Our search query looks like this:
SELECT score(1), d.*
FROM duns d
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
The :search value in this example will be 'highway'. The query can return 246k rows in around 2 seconds.
2 seconds is good, but we should be able to have a much faster response if the search query did not have to search the entire table, right? Since each user can only "view" records they are assigned to we reckon that if the search operation only had to scan a tiny tiny percent of the TEXT index we should see faster (and more relevant) results. If we now write the following query:
WITH subset
AS
(SELECT d.duns_loc
FROM duns d
JOIN primary_contact pc
ON d.duns_loc = pc.duns_loc
AND pc.emp_id = :employeeID)
SELECT score(1), d.*
FROM duns d
JOIN subset s
ON d.duns_loc = s.duns_loc
WHERE CONTAINS(TEXT_KEY, :search,1) > 0
ORDER BY score(1) DESC;
For reasons we have not been able to identify, this query actually takes longer to execute than the sum of the durations of its contributing parts: it takes over 6 seconds to run. Neither we nor our DBA can figure out why this query performs worse than a wide-open search. The wide-open search is not ideal, as the query would end up returning records the user doesn't have access to view.
Has anyone ever run into something like this? Any suggestions on what to look at or where to go? If anyone would like more information to help in diagnosis, let me know and I'll be happy to provide it here.
Thanks!!
Sometimes it can be good to separate the tables into separate sub-query factoring (WITH) clauses, inline views in the FROM clause, or an IN clause as a WHERE condition. Although there are some differences, using a sub-query factoring (WITH) clause is similar to using an inline view in the FROM clause. However, you should avoid duplication: you should not have the same table in two different places, as in your original query. You should have indexes on any columns that the tables are joined on, your statistics should be current, and your domain index should have regular synchronization and optimization, and be periodically rebuilt (or dropped and recreated) to keep it performing with maximum efficiency. The following demonstration uses a composite domain index (CDI) with FILTER BY, as suggested by Roger, then shows the explained plans for your original query and various others. Your original query has nested loops; all of the others have the same plan without the nested loops. You could also add index hints.
SCOTT@orcl_11gR2> -- tables:
SCOTT@orcl_11gR2> CREATE TABLE duns
2 (duns_loc NUMBER,
3 text_key VARCHAR2 (30))
4 /
Table created.
SCOTT@orcl_11gR2> CREATE TABLE primary_contact
2 (duns_loc NUMBER,
3 emp_id NUMBER)
4 /
Table created.
SCOTT@orcl_11gR2> -- data:
SCOTT@orcl_11gR2> INSERT INTO duns VALUES (1, 'highway')
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact VALUES (1, 1)
2 /
1 row created.
SCOTT@orcl_11gR2> INSERT INTO duns
2 SELECT object_id, object_name
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> INSERT INTO primary_contact
2 SELECT object_id, namespace
3 FROM all_objects
4 WHERE object_id > 1
5 /
76027 rows created.
SCOTT@orcl_11gR2> -- indexes:
SCOTT@orcl_11gR2> CREATE INDEX duns_duns_loc_idx
2 ON duns (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> CREATE INDEX primary_contact_duns_loc_idx
2 ON primary_contact (duns_loc)
3 /
Index created.
SCOTT@orcl_11gR2> -- composite domain index (cdi) with filter by clause
SCOTT@orcl_11gR2> -- as suggested by Roger:
SCOTT@orcl_11gR2> CREATE INDEX duns_text_key_idx
2 ON duns (text_key)
3 INDEXTYPE IS CTXSYS.CONTEXT
4 FILTER BY duns_loc
5 /
Index created.
SCOTT@orcl_11gR2> -- gather statistics:
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'DUNS')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> EXEC DBMS_STATS.GATHER_TABLE_STATS (USER, 'PRIMARY_CONTACT')
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- variables:
SCOTT@orcl_11gR2> VARIABLE employeeid NUMBER
SCOTT@orcl_11gR2> EXEC :employeeid := 1
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> VARIABLE search VARCHAR2(100)
SCOTT@orcl_11gR2> EXEC :search := 'highway'
PL/SQL procedure successfully completed.
SCOTT@orcl_11gR2> -- original query:
SCOTT@orcl_11gR2> SET AUTOTRACE ON EXPLAIN
SCOTT@orcl_11gR2> WITH
2 subset AS
3 (SELECT d.duns_loc
4 FROM duns d
5 JOIN primary_contact pc
6 ON d.duns_loc = pc.duns_loc
7 AND pc.emp_id = :employeeID)
8 SELECT score(1), d.*
9 FROM duns d
10 JOIN subset s
11 ON d.duns_loc = s.duns_loc
12 WHERE CONTAINS (TEXT_KEY, :search,1) > 0
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 4228563783
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 2 | 84 | 121 (4)| 00:00:02 |
| 1 | SORT ORDER BY | | 2 | 84 | 121 (4)| 00:00:02 |
|* 2 | HASH JOIN | | 2 | 84 | 120 (3)| 00:00:02 |
| 3 | NESTED LOOPS | | 38 | 1292 | 50 (2)| 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 5 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 6 | INDEX RANGE SCAN | DUNS_DUNS_LOC_IDX | 1 | 5 | 1 (0)| 00:00:01 |
|* 7 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("D"."DUNS_LOC"="PC"."DUNS_LOC")
5 - access("CTXSYS"."CONTAINS"("D"."TEXT_KEY",:SEARCH,1)>0)
6 - access("D"."DUNS_LOC"="D"."DUNS_LOC")
7 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- queries with better plans (no nested loops):
SCOTT@orcl_11gR2> -- subquery factoring (with) clauses:
SCOTT@orcl_11gR2> WITH
2 subset1 AS
3 (SELECT pc.duns_loc
4 FROM primary_contact pc
5 WHERE pc.emp_id = :employeeID),
6 subset2 AS
7 (SELECT score(1), d.*
8 FROM duns d
9 WHERE CONTAINS (TEXT_KEY, :search,1) > 0)
10 SELECT subset2.*
11 FROM subset1, subset2
12 WHERE subset1.duns_loc = subset2.duns_loc
13 ORDER BY score(1) DESC
14 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- inline views (sub-queries in the from clause):
SCOTT@orcl_11gR2> SELECT subset2.*
2 FROM (SELECT pc.duns_loc
3 FROM primary_contact pc
4 WHERE pc.emp_id = :employeeID) subset1,
5 (SELECT score(1), d.*
6 FROM duns d
7 WHERE CONTAINS (TEXT_KEY, :search,1) > 0) subset2
8 WHERE subset1.duns_loc = subset2.duns_loc
9 ORDER BY score(1) DESC
10 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("PC"."DUNS_LOC"="D"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("TEXT_KEY",:SEARCH,1)>0)
5 - filter("PC"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- ansi join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 JOIN primary_contact
4 ON duns.duns_loc = primary_contact.duns_loc
5 WHERE CONTAINS (duns.text_key, :search, 1) > 0
6 AND primary_contact.emp_id = :employeeid
7 ORDER BY SCORE(1) DESC
8 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- old join:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns, primary_contact
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc = primary_contact.duns_loc
5 AND primary_contact.emp_id = :employeeid
6 ORDER BY SCORE(1) DESC
7 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 153618227
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -- in clause:
SCOTT@orcl_11gR2> SELECT SCORE(1), duns.*
2 FROM duns
3 WHERE CONTAINS (duns.text_key, :search, 1) > 0
4 AND duns.duns_loc IN
5 (SELECT primary_contact.duns_loc
6 FROM primary_contact
7 WHERE primary_contact.emp_id = :employeeid)
8 ORDER BY SCORE(1) DESC
9 /
SCORE(1) DUNS_LOC TEXT_KEY
18 1 highway
1 row selected.
Execution Plan
Plan hash value: 3825821668
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
| 0 | SELECT STATEMENT | | 38 | 1406 | 83 (5)| 00:00:01 |
| 1 | SORT ORDER BY | | 38 | 1406 | 83 (5)| 00:00:01 |
|* 2 | HASH JOIN SEMI | | 38 | 1406 | 82 (4)| 00:00:01 |
| 3 | TABLE ACCESS BY INDEX ROWID| DUNS | 38 | 1102 | 11 (0)| 00:00:01 |
|* 4 | DOMAIN INDEX | DUNS_TEXT_KEY_IDX | | | 4 (0)| 00:00:01 |
|* 5 | TABLE ACCESS FULL | PRIMARY_CONTACT | 4224 | 33792 | 70 (3)| 00:00:01 |
Predicate Information (identified by operation id):
2 - access("DUNS"."DUNS_LOC"="PRIMARY_CONTACT"."DUNS_LOC")
4 - access("CTXSYS"."CONTAINS"("DUNS"."TEXT_KEY",:SEARCH,1)>0)
5 - filter("PRIMARY_CONTACT"."EMP_ID"=TO_NUMBER(:EMPLOYEEID))
SCOTT@orcl_11gR2> -
I'm sadly disappointed with my new Lenovo's performance and don't know what might be the cause.
I just got a new T510 with a Core i5 processor, 4GB RAM and Windows 7 64-bit. When I run a couple of programs, like Photoshop, a VirtualBox VM (850 MB of RAM allocated), and Google Chrome with a couple of tabs, I get stuck.
It seems that 4GB of memory is not enough for the work I did on a 2GB machine a week ago.
The computer's response time is a couple of seconds for window switching; new programs take minutes to start. RAM consumption reaches 98%, even though no single program is listed as consuming more than 150MB in either Task Manager or Resource Monitor. Aero effects work fine, though; there's no sloppiness in the graphical layer of the system.
Even now, after having killed VirtualBox, the remaining processes take up 89% of RAM in Resource Monitor.
I'm working on AC power with the "Performance" power setting, but I can hardly call it performance.
This is ridiculous, it was supposed to be a fast multitasking machine and it turns out to be a load of junk. I used to work on 2GB, 1.6GHz Celeron with Intel GMA processor, had Ubuntu host and Windows XP guest with 750MB RAM allocated plus a couple of programming tools running and never had such issues _on regular basis_.
I'm considering running a Ubuntu on this machine because I simply can't work this way on Windows 7... I can't even kill a process easily because it takes ages to End a process.
A friend of mine is running similar setup on a T410 machine with slower CPU and poorer graphics and he says this is really strange, because he never had such problems. My PC is new, 4 days since arrival, his is like 2-months old.
Has anyone experienced similar problem? What might be the cause?
At this point I have Chrome with ~10 tabs, empty Firefox, empty Photoshop, Pidgin, and 3 explorer windows running, Adobe Reader with 1 document and Sticky Notes. RAM consumption is 3495MB in use (yeah right), 276MB standby and 113 free.
What's wrong here?!
Please see the screenshot attached:
http://img715.imageshack.us/i/captureybh.png/
Well, the issue is solved. After removing the bundled Norton Internet Security suite it runs like gold-plated.
I'll get another anti-virus software, then.
Now, even with the Virtualbox running, I still have like 2GB free RAM. And no lags, no slow-downs. Woohoo.
**bleep** you Norton! -
BIBean poor performance when using Query.setSuppressRows()
Does anyone have experience in suppressing N/A cell values using BIBean? I was experimenting with Query.setSuppressRows(DataDirector.NA_SUPPRESSION). It does hide the rows that contain N/A values in a crosstab.
The problem is that the performance degrades significantly when I started drilling down the hierarchy. Without calling the method, I was able to drill into the pre-aggregated hierarchy in a few seconds. But with setSuppressRows(), it took almost 15 minutes to do the same drill.
Just for comparison, I used DML to report on the same data that I wanted to drill into. With 'zerorow' set to either yes or no, the data was fetched in less than a second.
Thanks for any help.
- Wei
At the moment we are hoping this will be fixed in a 10g database patch which is due early 2005. However, if you are using an Analytic Workspace then you could use OLAP DML to filter the zero and NA rows before they are returned to the query. I think this involves modifying the OLAP views that return the AW objects via SQL commands.
Hope this helps
Business Intelligence Beans Product Management Team
Oracle Corporation -
Poor performance when Distinct and Order By Used
Hello,
I am getting a slow response when I add DISTINCT and ORDER BY to the query:
Without DISTINCT and ORDER BY it takes 3.57 seconds; with DISTINCT and ORDER BY it takes 28.15 seconds, which is too much for our app.
The query is:
select distinct CC.acceso, CC.ext_acceso, TIT.TITULO_SALIDA
from (((Ocurrencias CT01 inner join
palabras p0 on (CT01.cod_palabra = p0.cod_palabra and p0.palabra like 'VENEZUELA%' AND p0.campo = 'AUTOR')) INNER JOIN
CENTRAL CC ON (CT01.ACCESO = CC.ACCESO AND CT01.EXT_ACCESO = CC.EXT_ACCESO))) inner join
codtit ctt on (CC.acceso = ctt.acceso and CC.ext_acceso = ctt.ext_acceso) inner join
titulos tit on (ctt.cod_titulo = tit.cod_titulo and ctt.portada = '1')
where CC.nivel_reg <> 's'
ORDER BY 3 ASC;
The query plan for the query WITH Distinct and Order By is:
Elapsed: 00:00:28.15
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=301 Card=47 Bytes=12220)
1 0 SORT (ORDER BY) (Cost=301 Card=47 Bytes=12220)
2 1 SORT (UNIQUE) (Cost=300 Card=47 Bytes=12220)
3 2 NESTED LOOPS (Cost=299 Card=47 Bytes=12220)
4 3 NESTED LOOPS (Cost=250 Card=49 Bytes=4165)
5 4 NESTED LOOPS (Cost=103 Card=49 Bytes=2989)
6 5 NESTED LOOPS (Cost=5 Card=49 Bytes=1960)
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'PALABRAS' (TABLE) (Cost=3 Card=1 Bytes=19)
8 7 INDEX (RANGE SCAN) OF 'PALABRA' (INDEX (UNIQUE)) (Cost=2 Card=1)
9 6 INDEX (RANGE SCAN) OF 'PK_OCURRENCIAS' (INDEX (UNIQUE)) (Cost=2 Card=140 Bytes=2940)
10 5 TABLE ACCESS (BY INDEX ROWID) OF 'CENTRAL' (TABLE) (Cost=2 Card=1 Bytes=21)
11 10 INDEX (UNIQUE SCAN) OF 'PK_CENTRAL' (INDEX (UNIQUE)) (Cost=1 Card=1)
12 4 TABLE ACCESS (BY INDEX ROWID) OF 'CODTIT' (TABLE) (Cost=3 Card=1 Bytes=24)
13 12 INDEX (RANGE SCAN) OF 'PK_CODTIT' (INDEX (UNIQUE)) (Cost=2 Card=1)
14 3 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
15 14 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
Statistics
154 recursive calls
0 db block gets
32070 consistent gets
1622 physical reads
0 redo size
305785 bytes sent via SQL*Net to client
2807 bytes received via SQL*Net from client
212 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
3149 rows processed
The query plan for the query WITHOUT Distinct and Order By is:
Elapsed: 00:00:03.57
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=299 Card=47 Bytes=12220)
1 0 NESTED LOOPS (Cost=299 Card=47 Bytes=12220)
2 1 NESTED LOOPS (Cost=250 Card=49 Bytes=4165)
3 2 NESTED LOOPS (Cost=103 Card=49 Bytes=2989)
4 3 NESTED LOOPS (Cost=5 Card=49 Bytes=1960)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'PALABRAS' (TABLE) (Cost=3 Card=1 Bytes=19)
6 5 INDEX (RANGE SCAN) OF 'PALABRA' (INDEX (UNIQUE)) (Cost=2 Card=1)
7 4 INDEX (RANGE SCAN) OF 'PK_OCURRENCIAS' (INDEX (UNIQUE)) (Cost=2 Card=140 Bytes=2940)
8 3 TABLE ACCESS (BY INDEX ROWID) OF 'CENTRAL' (TABLE) (Cost=2 Card=1 Bytes=21)
9 8 INDEX (UNIQUE SCAN) OF 'PK_CENTRAL' (INDEX (UNIQUE)) (Cost=1 Card=1)
10 2 TABLE ACCESS (BY INDEX ROWID) OF 'CODTIT' (TABLE) (Cost=3 Card=1 Bytes=24)
11 10 INDEX (RANGE SCAN) OF 'PK_CODTIT' (INDEX (UNIQUE)) (Cost=2 Card=1)
12 1 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
13 12 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
Statistics
3376 recursive calls
0 db block gets
33443 consistent gets
1061 physical reads
0 redo size
313751 bytes sent via SQL*Net to client
2807 bytes received via SQL*Net from client
422 SQL*Net roundtrips to/from client
90 sorts (memory)
0 sorts (disk)
3149 rows processed
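For intuition, the two extra steps in the first plan (SORT (UNIQUE) and SORT (ORDER BY)) amount to a de-duplication pass followed by an ordering pass over the ~3149 result rows. A rough Python sketch of that extra work (the sample rows are invented; this models the plan steps, not Oracle's actual sort implementation):

```python
# Invented sample of joined rows (acceso, ext_acceso, titulo_salida);
# the joins can produce duplicates, which is why DISTINCT is needed.
rows = [
    ("A1", "E1", "Caracas"),
    ("A1", "E1", "Caracas"),   # duplicate produced by the joins
    ("A2", "E1", "Barinas"),
]

def distinct_order_by_3(rows):
    unique = set(rows)                          # models SORT (UNIQUE)
    return sorted(unique, key=lambda r: r[2])   # models SORT (ORDER BY) on column 3

print(distinct_order_by_3(rows))
```

Both passes have to materialize and sort the full intermediate result, which is where the extra elapsed time over the plain nested-loops plan comes from.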
I would appreciate it a lot if somebody could tell me how to improve the performance of the query with DISTINCT and ORDER BY.
Thank you very much,
Icaro Alzuru C.Hello,
I am getting an slow answer when I add Distinct and Order By to the query:
Without Distinct and Order By lasts 3.57 seconds; without Distinct and Order By lasts 28.15 seconds, which it's too much for our app.
The query is:
select distinct CC.acceso, CC.ext_acceso, TIT.TITULO_SALIDA
from (((Ocurrencias CT01 inner join
palabras p0 on (CT01.cod_palabra = p0.cod_palabra and p0.palabra like 'VENEZUELA%' AND p0.campo = 'AUTOR')) INNER JOIN
CENTRAL CC ON (CT01.ACCESO = CC.ACCESO AND CT01.EXT_ACCESO = CC.EXT_ACCESO))) inner join
codtit ctt on (CC.acceso = ctt.acceso and CC.ext_acceso = ctt.ext_acceso) inner join
titulos tit on (ctt.cod_titulo = tit.cod_titulo and ctt.portada = '1')
where CC.nivel_reg <> 's'
ORDER BY 3 ASC;
The query plan for the query WITH Distinct and Order By is:
Elapsed: 00:00:28.15
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=301 Card=47 Bytes=12220)
1 0 SORT (ORDER BY) (Cost=301 Card=47 Bytes=12220)
2 1 SORT (UNIQUE) (Cost=300 Card=47 Bytes=12220)
3 2 NESTED LOOPS (Cost=299 Card=47 Bytes=12220)
4 3 NESTED LOOPS (Cost=250 Card=49 Bytes=4165)
5 4 NESTED LOOPS (Cost=103 Card=49 Bytes=2989)
6 5 NESTED LOOPS (Cost=5 Card=49 Bytes=1960)
7 6 TABLE ACCESS (BY INDEX ROWID) OF 'PALABRAS' (TABLE) (Cost=3 Card=1 Bytes=19)
8 7 INDEX (RANGE SCAN) OF 'PALABRA' (INDEX (UNIQUE)) (Cost=2 Card=1)
9 6 INDEX (RANGE SCAN) OF 'PK_OCURRENCIAS' (INDEX (UNIQUE)) (Cost=2 Card=140 Bytes=2940)
10 5 TABLE ACCESS (BY INDEX ROWID) OF 'CENTRAL' (TABLE) (Cost=2 Card=1 Bytes=21)
11 10 INDEX (UNIQUE SCAN) OF 'PK_CENTRAL' (INDEX (UNIQUE)) (Cost=1 Card=1)
12 4 TABLE ACCESS (BY INDEX ROWID) OF 'CODTIT' (TABLE) (Cost=3 Card=1 Bytes=24)
13 12 INDEX (RANGE SCAN) OF 'PK_CODTIT' (INDEX (UNIQUE)) (Cost=2 Card=1)
14 3 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
15 14 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
Statistics
154 recursive calls
0 db block gets
32070 consistent gets
1622 physical reads
0 redo size
305785 bytes sent via SQL*Net to client
2807 bytes received via SQL*Net from client
212 SQL*Net roundtrips to/from client
10 sorts (memory)
0 sorts (disk)
3149 rows processed
The query plan for the query WITHOUT Distinct and Order By is:
Elapsed: 00:00:03.57
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=299 Card=47 Bytes=12220)
1 0 NESTED LOOPS (Cost=299 Card=47 Bytes=12220)
2 1 NESTED LOOPS (Cost=250 Card=49 Bytes=4165)
3 2 NESTED LOOPS (Cost=103 Card=49 Bytes=2989)
4 3 NESTED LOOPS (Cost=5 Card=49 Bytes=1960)
5 4 TABLE ACCESS (BY INDEX ROWID) OF 'PALABRAS' (TABLE) (Cost=3 Card=1 Bytes=19)
6 5 INDEX (RANGE SCAN) OF 'PALABRA' (INDEX (UNIQUE)) (Cost=2 Card=1)
7 4 INDEX (RANGE SCAN) OF 'PK_OCURRENCIAS' (INDEX (UNIQUE)) (Cost=2 Card=140 Bytes=2940)
8 3 TABLE ACCESS (BY INDEX ROWID) OF 'CENTRAL' (TABLE) (Cost=2 Card=1 Bytes=21)
9 8 INDEX (UNIQUE SCAN) OF 'PK_CENTRAL' (INDEX (UNIQUE)) (Cost=1 Card=1)
10 2 TABLE ACCESS (BY INDEX ROWID) OF 'CODTIT' (TABLE) (Cost=3 Card=1 Bytes=24)
11 10 INDEX (RANGE SCAN) OF 'PK_CODTIT' (INDEX (UNIQUE)) (Cost=2 Card=1)
12 1 TABLE ACCESS (BY INDEX ROWID) OF 'TITULOS' (TABLE) (Cost=1 Card=1 Bytes=175)
13 12 INDEX (UNIQUE SCAN) OF 'PK_TITULOS' (INDEX (UNIQUE)) (Cost=0 Card=1)
Statistics
3376 recursive calls
0 db block gets
33443 consistent gets
1061 physical reads
0 redo size
313751 bytes sent via SQL*Net to client
2807 bytes received via SQL*Net from client
422 SQL*Net roundtrips to/from client
90 sorts (memory)
0 sorts (disk)
3149 rows processed
I would appreciate it a lot if somebody could tell me how to improve the performance of the query with Distinct and Order By.
Thank you very much,
Icaro Alzuru C. -
Poor performance when running DBMS_STATS package
Hello,
I thought I would throw this issue out to the general population as I am stumped.
I have two identical schemas on two different servers. The database is 9.2.0.6, and for QAS and Production reasons the servers and database settings are identical.
I am running the DBMS_STATS.GATHER_SCHEMA_STATS package on both databases. On the QAS DB it takes roughly 1.5 hours while on the Production database it takes 8.0 hours.
Looking at the statspack analysis, the biggest thing to jump out at me is that the temporary tablespace is getting 4.6 Million I/O hits (reads) while the QAS is only getting a few thousand.
So ... this sort of indicates that something is going on in the PGA which could cause Oracle to spill to the TEMP tablespace. However, when I look at host performance (using Grid Control), the server's memory, file I/O, and CPU, while high, are not over 80%.
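For reference, this is roughly how one could confirm the TEMP spill while the stats job runs - a sketch only; the dynamic performance views are standard Oracle, but which rows and columns to watch is my own pick:

```sql
-- PGA activity: "extra bytes read/written" grows when work areas spill to disk.
SELECT name, value
  FROM v$pgastat
 WHERE name IN ('total PGA allocated',
                'extra bytes read/written',
                'cache hit percentage');

-- Sessions currently using temporary segments (i.e. sorting on disk):
SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks
  FROM v$sort_usage u
  JOIN v$session s ON s.saddr = u.session_addr;
```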
Has anyone else seen an issue like this? Any ideas where to look?
Thanks,
Chris
To the extent that you are gathering histograms on the data, Oracle potentially has to do a reasonable amount of sorting. If one machine has substantially more physical RAM, that presumably implies that it has more RAM available for the PGA, which would allow more sorts to happen in memory rather than on disk.
Depending on how you create a "refresh copy of production" and what parameters you are passing in to DBMS_STATS, it is also possible that the quality assurance database is gathering substantially more histograms than the production database (i.e. you refresh lower environments via export and import rather than doing an RMAN clone and tell Oracle to gather histograms on any columns that currently have histograms).
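A sketch of both sides of that - the schema name and parameter choices below are illustrative assumptions, not taken from the original post:

```sql
-- Which columns currently have histograms (the "SIZE REPEAT" input):
SELECT table_name, column_name, histogram
  FROM dba_tab_columns
 WHERE owner = 'APP_SCHEMA'
   AND histogram <> 'NONE';

-- Gather schema stats with no histograms at all (minimal sorting):
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'APP_SCHEMA',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE 1');   -- basic stats only
END;
/

-- ...versus re-gathering only the histograms that already exist:
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname    => 'APP_SCHEMA',
    method_opt => 'FOR ALL COLUMNS SIZE REPEAT');    -- keep existing histograms
END;
/
```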
Justin -
Poor performance when using skip-level hierarchies
Hi there,
currently we have big performance issues when drilling in a skip-level hierarchy (each drill takes around 10 seconds).
OBIEE is producing 4 physical SQL statements when drilling, e.g. into the 4th level (one SQL statement per level). The statements run in parallel and are pretty fast (the database doesn't need more than 0.5 seconds to produce the result), but ... and here we probably have a problem somewhere ... putting all 4 results together in OBIEE takes another 8 seconds.
These are not big datasets the database is returning - around 5-20 records for each select statement.
The question is: why does it take so long to put the data together on the server? Do we have to reconfigure some parameters to make it faster?
Please guide.
Regards,
Rafael
If you really and exclusively want "OBIEE can handle such queries" - i.e. not touch the database - then you had best put a clever caching strategy in place.
First angle of attack should be the database itself, though. Best sit down with a data architect and/or your DBA to find the best setup possible physically, and then, when you've optimized that (with regard to the kind of queries emitted against it), you can move up to the OBIS. Always try to fix the issue as close to the source as possible. -
Is anyone able to explain really poor performance when using 'If Exists'?
Hello all. We've recently had a performance spike when using the 'if exists' construct, which is a construct that we use throughout much of our code. The problem is, it appears illogical, since it can be removed via a tiny modification that does not change the core code.
I can demonstrate:
This is the (simplified) format of the base original code. Its purpose is to identify when a column value has changed by comparing a main table and a complex view:
select 1 from MainTable m
inner join ComplexView v on m.col2 = v.col2
where m.col3 <> v.col3
This is doing a table scan; however, the table only has 17,000 rows while the view only has 7,000 rows. The sql executes in approximately 3 seconds.
However if we add the 'If Exists' construct around the original query, like such:
if exists (
select 1 from MainTable m
inner join ComplexView v on m.col2 = v.col2
where m.col3 <> v.col3
)
print 1
The sql now takes over 2 minutes to run. Note that the core sql is unchanged; all I have done is wrap it with 'If Exists'.
I can't fathom why the if exists construct is taking so much longer, especially since the core code is unchanged; however, more importantly, I would like to understand why, since we commonly use this type of syntax.
Any advice would be greatly appreciated.
OK, that's interesting. Adding the top 1 clause greatly affects the performance (in a bad way).
The original query (as below) still runs in a few seconds.
select 1 from MainTable m
inner join ComplexView v on m.col2 = v.col2
where m.col3 <> v.col3
The 'Top 1' query (as below) takes almost 2 minutes however. It's exactly the same query, but with 'top 1' added to it.
select top 1 1 from MainTable m
inner join ComplexView v on m.col2 = v.col2
where m.col3 <> v.col3
I suspect that the top 1 is performing a very similar operation to the exists, in that it is 'supposed' to exit as soon as it finds a single row that satisfies its criteria.
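One common workaround, sketched below as a hedged example only - it assumes the slowness comes from the optimizer choosing a poor row-at-a-time plan against the complex view, and it reuses the simplified names from the example above - is to materialize the view once into a temp table, so the EXISTS check runs against a plain table:

```sql
-- Hedged sketch (T-SQL): evaluate ComplexView exactly once, then test existence.
SELECT v.col2, v.col3
INTO   #v                          -- temp table; forces a single evaluation of the view
FROM   ComplexView v;

IF EXISTS (
    SELECT 1
    FROM   MainTable m
    INNER JOIN #v v ON m.col2 = v.col2
    WHERE  m.col3 <> v.col3
)
    PRINT 1;

DROP TABLE #v;
```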
It's still not really any closer to making me understand what is causing the issue however. -
[SOLVED] Older NVIDIAs - (quite poor) performance when watching movies
I'd like to ask what packages and configuration you use to achieve the best performance for NVIDIA GeForce FX 5200, mainly for watching movies. I've come to this observation:
Global parameters
NVIDIA GeForce FX 5200 (AGP 4x, 128 MB)
1680x1050, 60 Hz
Pentium 4 2.4 GHz, 1 GB RAM
SMPlayer 0.6.9 (SVN r3447)
Used video: 1280 x 720, 23.976 FPS, fullscreen
Windows XP (SP2):
Driver: 16. 5. 2008 (version 6.14.11.7519)
MPlayer SVN r30369
Output method: directx(fast)
Video: fluent
Ubuntu 11.04:
Driver: nvidia-173 (probably, Ubuntu is so confusing)
MPlayer 1.0rc4-4.5.2
Output method: xv
Video: fluent but often with a diagonal break
Arch Linux:
Driver: nouveau
MPlayer SVN r34426
Output method: xv
Video: drops to a few FPS after a minute
Last edited by nemamradfazole (2011-12-26 00:02:23)
I'm not an A/V guy, but I think you can encode stuff in different ways and that might make a difference, so just because I haven't had any problems with 720p doesn't mean you're doing something wrong; maybe you're viewing higher-quality material.
I don't get choppy video even when I'm maxing out the CPU.
mplayer --framedrop -vfm ffmpeg -lavdopts lowres=1:fast:skiploopfilter=all
is the magic incantation I use to view some 720p stuff on my other PC, which has just an old, unsupported integrated Intel card. I get "almost-fluent but often with a diagonal break" video.
Yes, this is not 30 fps 720p anymore, so I'm cheating ;P
nemamradfazole wrote: I am curious, can you watch 1080p as well?
Not really. -
Poor Performance when using Question Pooling
I'm wondering if anyone else out there is experiencing
Captivate running very slowly when using question pooling. We have
about 195 questions with some using screenshots in jpeg format.
By viewing the Windows Task Manager, CP is using anywhere
between 130 and 160 K worth of memory. What is going on here? It's
hammering the system pretty hard. It takes a large effort just to
reposition the screenshot or even move a distractor.
I'm running this on a 3.20GHz machine with 3GB of RAM.
Any Captivate Gurus out there care to tackle this one?
Help.
MtnBiker1966,
I have noticed the same problem. I only have 60 slides with
43 questions and the Question Pool appears to be a big drain on
performance. Changing the buttons from Continue to Go to next slide
helped a little, but performance still drags compared to not using
a question pool. I even tried reducing the number of question
pools, but that did not affect the performance any. The search
continues.
Darin -
Incredibly poor performance when installing software (HDD problem?)
The computer was super slow (slower than any Mac I've ever owned) when installing software. The hard drive seemed to just space out (time out). Activity Monitor showed zero or little activity while installing apps from a DVD.
So I decided to format the HDD and reinstall everything.
I finished reinstalling the OS, and now I am using the DVD labeled "Bundled Applications" to install iLife; it's been over one hour and it has only progressed about 25%. The setup assistant says that the estimated time will be one hour and 44 minutes!!!
I changed the Power Options to not turn off the hard drive whenever possible, just in case, but it just doesn't seem to be doing anything at all (same problem that made me want to format and reinstall from scratch).
Why is this so slow? Anybody?
Thanks in advance.
Message was edited by: Alexkywalker
Here's some more useful information that I should have mentioned: I have 2 external hard drives attached to the computer via USB and FireWire 800.
Before installing I ejected the hard drives but did NOT unplug them.
I noticed that while installing some other software, the Apple Installer, which is used by all installations (aka Setup Assistant), automatically re-mounted the hard drives I had ejected, so I unplugged them and the installation process sped up to normal.
So I think this is an issue with the Apple Installer searching for information on all drives connected to the machine.
I will try again tonight with the Bundled Software and see.
Maybe you are looking for
-
My Apple TV will not let me purchase items directly from iTunes, and I am logged in. Why?
I have not been able to rent a movie from my Apple TV. I am logged into iTunes. When I try to rent a movie, it tells me that I need to verify payment on my computer. I have then rented a movie from my PC, but this will not show up on my Apple TV, an
-
Gracefully throwing an error when the web service is down
Guys, I have web service calls in my application (through a web service data control). My application throws an exception when the web service is down. I want my application to throw some meaningful error on the UI. How can this be achieved?
-
In scripts: when I write a statement in the editor, e.g. /: IF <> OR <> /: OR <>, and the IF statement extends to the next line, what is the command - is it /:, a space, or =? Please help me out.
-
Power \ lock key not working on lumia 520
My power key suddenly stopped working on my Lumia 520; should I go to a Nokia care center?
-
Signature Tool Won't Work in Reader XI
I need to fill out government forms & upload; their webpage made me download Adobe Reader XI (standard, & I am on Lion OS) and the forms are filled in within their browser. I can fill in the form fields as required, but am unable to use the 'SIGNATURE