Question about PARALLEL hint
The documentation says that the PARALLEL hint can be used in SELECT statements like PARALLEL(emp 8). I have also seen syntaxes like PARALLEL, PARALLEL 8, and PARALLEL(8), and in each case the steps in the execution plans were the same, but the cost assigned was a bit different between each version (though significantly lower than for a query without this hint).
How does Oracle interpret these three syntaxes, i.e. /*+ PARALLEL */, /*+ PARALLEL 8 */, /*+ PARALLEL(8) */?
Are there any parts of those syntaxes that are ignored or fall back to default behaviour?
Which syntax is safe to use when one doesn't want to specify the table being parallelized, i.e. doesn't want to use a syntax like PARALLEL(emp 8)?
thank you
Edited by: 943276 on 2012-07-25 19:17
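For reference, the variants under discussion could be written out like this (emp is just an illustrative table; the exact interpretation depends on the Oracle version, so treat this as a sketch, not documentation):

```sql
-- Object-level form from the documentation: parallelize table emp at degree 8
SELECT /*+ PARALLEL(emp 8) */ * FROM emp;

-- Statement-level forms:
SELECT /*+ PARALLEL */    * FROM emp;   -- let the optimizer choose the degree
SELECT /*+ PARALLEL(8) */ * FROM emp;   -- statement-level degree of 8 (11gR2+ syntax)
SELECT /*+ PARALLEL 8 */  * FROM emp;   -- degree outside parentheses; may be parsed as just PARALLEL
```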
943276 wrote:
the documentation says that the PARALLEL hint can be used like PARALLEL(emp 8). I have also seen syntaxes like PARALLEL, PARALLEL 8, PARALLEL(8), and in each case the steps in the execution plans were the same but the cost assigned was different between each version (and significantly lower than for a query without this hint).
How does Oracle interpret these three syntaxes, i.e. /*+ PARALLEL */, /*+ PARALLEL 8 */, /*+ PARALLEL(8) */?
Are there any parts of those syntaxes that are ignored or fall back to default behaviour?
Which syntax is safe to use when one doesn't want to specify the table being parallelized, i.e. doesn't want to use a syntax like PARALLEL(emp 8)?
thank you
Edited by: 943276 on 2012-07-25 19:17
You are NOT qualified to use any HINT.
Oracle recommends against using any HINT.
Similar Messages
-
Question about Parallels and ATI Radeon 5770
Ok, so I have a Mac Pro 3,1 here. I'm upgrading it to 16GB and two 7200 RPM 3TB drives which I'll run in Raid 0.
It has an ATI 2600HD with 256MB VRAM. This card has two DVI ports.
I just purchased an ATI Radeon 5770. I will use it with two Cinema Displays (DVI connections).
To connect the second display, I have the Apple Mini DisplayPort to Dual-Link DVI Adapter at Firmware 1.03.
My question concerns Parallels 8.
My Windows 7 VM will be a heavy-use VM so I want snappy performance. I was planning to dedicate 8GB RAM, half my CPU cores to it, and 512MB VRAM on the 5770.
What I was wondering about was the possibility of leaving in the 2600 HD and using its GPU to support my Parallels VM. Is that feasible? Practical?
Thanks
Dump the 2600XT (put it on the shelf, don't use it)
Install Windows natively first, then you can use it both ways, natively and dual boot, plus in a VM when that works.
Another option: 10.8.2 supports more Nvidia cards, and more than 1GB of VRAM.
Don't use green 3TB in an array. -
A Few Questions About Parallels 5 and Windows 7
Hey guys!
So, I am taking a computer programming class that requires Windows, just so we can all be on the same screen at the same time (lame, I know). Anywho, I have a Mac, so my teacher provided me with a log-in to MSDNAA, to get Windows 7 for free, and he gave me Parallels 5.0 for free as well. (go community colleges!)
A few questions:
1- I am running Snow Leopard right now on my 2009 MacBook Pro, and my processor is 64-bit, but after running the command uname -a I discovered that my kernel is loading in 32-bit mode. Is this normal? Shouldn't it be running in 64-bit mode?
2- Which Windows 7 should I get, the x64 (64-bit) or x86 (32-bit)?
Thanks for any/all help!
Model Name: MacBook Pro
Model Identifier: MacBookPro5,4
Processor Name: Intel Core 2 Duo
Processor Speed: 2.53 GHz
Number Of Processors: 1
Total Number Of Cores: 2
L2 Cache: 3 MB
Memory: 4 GB
Chipset Model: NVIDIA GeForce 9400M
Type: GPU
Bus: PCI
VRAM (Total): 256 MB
Terminal reply after command:
local 10.4.0 Darwin Kernel Version 10.4.0: Fri Apr 23 18:28:53 PDT 2010; root:xnu-1504.7.4~1/RELEASE_I386 i386
The i386 means it's in 32-bit mode, right?
Try the Windows forum area - http://discussions.apple.com/forum.jspa?forumID=687
-
Question about Parallels using Bootcamp partition
I was about to install XP via Boot Camp, but I would mainly be using Parallels to run XP from the Boot Camp partition. What I was wondering is: if I make any changes to XP through Parallels, like installing software or creating a new folder, will they be reflected when I boot into the actual XP partition via Boot Camp? Basically, do changes made to Windows through Parallels carry over to the Boot Camp partition, given that Parallels is using that partition?
Thanks
Thanks a lot. While I am at it, if I use the Boot Camp partition with Parallels, I cannot suspend the VM, right?
Parallels will behave the same, whether it has its own copy of Windows, or is using that on the Boot Camp partition.
The only difference is backup. If you use Parallels with Windows in a disk image file (Preferably a sparse bundle), Windows can be backed up with Time Machine. If Windows is on its own partition, Time Machine can't handle it. -
Question about parallels desktop
Hello!
I have a little problem with my Parallels Desktop. I have installed a Windows XP operating system with Parallels Desktop, but the virtual video card's memory is only 256MB. In Mac OS X I have 2 video cards, one with 256MB memory and one with 512MB memory. Maybe you can help me solve this problem? Thanks in advance! :)
Hi Tojas;
I would certainly hope that if you told the folks at Parallels that you were interested enough in their code to try a demo but that you were having problems with it that they would help you.
I would start at this site
http://forum.parallels.com/forumdisplay.php?f=58
Allan
Message was edited by: Allan Eckert -
Currently running Oracle 8.1.7.3 on 32 CPU HP box. I'm doing some data analysis and ran in to some inconsistencies on what columns parallel query works on. For most columns it kicks off all the threads(ora_pxxx) but on a few of the columns, it will only run 1 thread. These queries run full table scans.
Example SQL:
Select /* parallel (column1, 10) */ column1
from table1
I can't seem to find much documentation on parallel queries.
Sorry, a correction on the SQL format:
Select /*+ parallel(tb1, 20)*/ column1
from table1 tb1 -
I already have Windows Vista installed on my Mac via Boot Camp, but I would like to install Parallels so I don't have to keep restarting. Is it possible to use Parallels with my Boot Camp partition, or do I need to reinstall Windows?
Hi Adam,
Parallels can use a BootCamp Windows partition for creating a Parallels Virtual Machine.
Check the online documentation available at their website for a how-to.
regards
Stefan -
Questions about parallel and series sturcture in SCTL (single clock timed loop) in FPGA VI
I am using LV 8.2.
I have written an FPGA VI; in this VI, there are 3 filters inside the SCTL (single clock timed loop) (I use shift registers to get the effect of a unit delay). Would I save FPGA resources (such as slices and LUTs) in the compilation report if I used 3 SCTLs (1 SCTL containing 1 filter) and cascaded them in series?
Thank you!
Sorry, I am afraid that you have misunderstood my meaning.
Here is the method you suggested before:
I mean I use the shift registers attached to the while loop, not in the SCTL. The number of shift registers used will not decrease when we increase the number of SCTLs in the second frame of the flat sequence structure. In total, there are 6 pairs of shift registers for 3 filters (1 filter needs 2 pairs). I mean I put the calculation parts, such as b0*x[n]+b1*x[n-1]+b2*x[n-2], inside the SCTL (the multiplication and add operations), instead of putting the numeric operation parts of all 3 filters in one SCTL. Will we reduce the resources used if we use 1 SCTL for the operation parts of 1 filter, since the code inside each SCTL will be reduced?
Could you tell me whether such an approach will reduce the resources used, or whether there is no difference?
Thank you! -
A few questions about MacBooks and Parallels Desktop.
I have a few questions about MacBooks and Parallels Desktop.
1) I understand I need at least 1GB of RAM to run Parallels Desktop but what about the hard drive, is the stock 60GB drive big enough?
2) Related to question 1, even if it was big enough to allow me to install and run Windows would the 60GB drive be enough if I wanted to install a Linux distribution as well?
3) This has nothing to do with Parallels Desktop but thought I'd ask it here anyway, do Apple Stores carry just the stock MacBooks, or do they carry other configurations?
Thanks
Keith
1. It depends on how intensively you use that HD for saving data on both Mac OS and XP. For a standard installation of both OS X and XP, 60 GB is enough.
2. Same answer as no. 1. You can install all three on that HD, but the extra space available for your data will get less and less. You can save your data on an external drive or back it up to CD/DVD and erase it from the HD to keep the free space.
Remember to leave at least 2 or 3 Gb for virtual memory usage.
3. Just call them, maybe they don't have it in store stock, but by appointment they might configure one for you before your pick-up date.
Good Luck -
Effects of PARALLEL hint in different parts of SQL script
I have a fairly large data warehouse with most of the Child tables having more than 5 billion rows. They are partitioned by a DATE column.
I have several local B-tree and bitmap indexes on the appropriate columns.
The DEGREE on most of the large tables is currently set to 4.
AUTO_DOP is not currently being used.
The DW runs on a 64 processor server with 64GB memory and 16k block size.
Database is Oracle 11.2.0.2 EE on Solaris
I have several queries that extract data into separate reference tables that are totally refreshed on typically a monthly basis although a few are daily.
The queries use an INSERT /*+ APPEND */ and usually are a JOIN of 3-4 tables and/or in-line views with a GROUP BY on 1-2 columns.
I'm trying to find the best place to specify a PARALLEL hint or optionally force parallel DML in the session and set the degree via the 'ALTER SESSION ...' statement.
Here are the options I'm looking at: (others are greatly appreciated!)
1) use the /*+ PARALLEL x */ hint only on the topmost SELECT
2) specify the /*+ PARALLEL x */ hint on each separate SELECT in all the sub-queries and in-line views where it's deemed useful (i.e. not on very small tables).
3) use an 'ALTER SESSION FORCE PARALLEL DML PARALLEL x;'
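As a sketch of options (1) and (3) - the table and column names here are placeholders for the real statements, and degree 8 is illustrative:

```sql
-- Option 1: hint only on the topmost SELECT of the INSERT /*+ APPEND */
INSERT /*+ APPEND */ INTO ref_table (col1, col2, cnt)
SELECT /*+ PARALLEL(8) */ t1.col1, t2.col2, COUNT(*)
FROM   big_t1 t1
JOIN   big_t2 t2 ON t1.id = t2.id
GROUP  BY t1.col1, t2.col2;

-- Option 3: force parallelism at the session level instead
ALTER SESSION ENABLE PARALLEL DML;
ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
```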
Questions:
a) by using option (1) will the DOP in that topmost SELECT be used for all subsequent SELECT statements below it?
b) same question for using option (3)?
c) how can I monitor the activity and verify what DOP is used in the different query sections? I have tried to follow the script execution in TOAD but haven't had much luck.
Thanks very much for your help and please let me know if you need any more information.
-gary
garywicke wrote:
Thanks for the feedback! Very useful!
A couple of follow-ups.
My SQL statement structure is basically like this:
INSERT /*+ APPEND PARALLEL (8) */ INTO <table 1> (COL1, COL2, COL3)
SELECT /*+ PARALLEL (24) */ COL1, COL2, COL3
FROM <table 2>
WHERE ...
GROUP BY ...
I just checked the docs. The syntax for the parallel hint listed for 10gR1 (it could have changed for 11gR2 - go ahead and see for yourself) is:
parallel(table_spec [degree])
expecting a table designation and an optional degree. If your syntax is unrecognized, the hint will be ignored.
It is possible the syntax you are using is valid, being listed elsewhere than the hint definitions. Some hints aren't listed where hints are defined but in the data warehousing guide and so are semi-documented. I have not seen that usage myself.
>
After reading about 'ALTER SESSION ENABLE PARALLEL DML', I can see how to set it but I can't see how to verify its current state (enabled or disabled). I looked at several system parameters but it wasn't obvious to me. I did see PARALLEL_MAX_SERVERS (100) and PARALLEL_MIN_SERVERS (0) but didn't know if those applied to any and all SQL statements, including the DML I'm doing.
I don't know how to tell if parallel DML is enabled with ALTER SESSION ENABLE PARALLEL DML except by turning it on. Using SHOW PARAMETERS PARALLEL in SQL*Plus did not tell me anything when I just tried it. You should be able to turn it on and get an indication from an execution plan that it is being used. -
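A sketch of one way to check the session state discussed above, assuming SELECT access on V$SESSION (the PDML_STATUS/PDDL_STATUS/PQ_STATUS columns are in the 11.2 reference; verify on your version):

```sql
-- Shows ENABLED / DISABLED / FORCED for the current session
SELECT pdml_status, pddl_status, pq_status
FROM   v$session
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');
```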
Parallel hint causes a query to run slower?
I have an INSERT...SELECT query where the SELECT is a join between a table with one billion rows (although a WHERE clause on an indexed column restricts it to "only" 300 million), a table with 30 million rows, and a table with about 100,000 rows, where the result is about 20 rows. When I first ran it, it took about 2 hours. I added a PARALLEL hint, and the explain plan showed that it was being used (and v$session showed about 30 additional connections while it ran), but now it takes four hours.
Is there a reason parallel processing would cause a query to run slower?
insert /*+ append */ into employees_by_age_group
( pay_plan
, age_range
, pay_level
, fy
, employee_count
)
select /*+ parallel */
emp.pay_plan
, to_char(d.min_age) || '-' || to_char(d.max_age) as age_range
, emp.pay_level
, pay.fy
, count(pay.employee_id) as employee_count
from
(
select /*+ index(pay_info pay_info_index_on_site) */
employee_id
, extract(year from (dte_ppe_end + 92)) as fy
, count(employee_id) as num_recs
from pay_info
where extract(month from dte_ppe_end) = 10
and extract(day from dte_ppe_end) between 14 and 27
and substr(pay_type, 1, 1) IN ('A', 'B', 'C')
and site like 'Z%'
group by employee_id, extract(year from (dte_ppe_end + 92))
) pay
join
(
select employee_id
, pay_plan
, pay_grade
, pay_step
, file_date
from
(
select /*+ index(employee_info employee_info_index_on_site) */
employee_id
, pay_level
, file_date
, max(file_date)
over (partition by extract(year from (file_date + 61)))
as last_file_date
from employee_info
where site like 'Z%'
)
where file_date = last_file_date
) emp
on (
emp.employee_id = pay.employee_id
and extract(year from emp.file_date) = pay.fy - 1
)
join (
select employee_id
, dob
from (
select employee_id
, date_birth as dob
, row_number() over (partition by employee_id order by date_file desc) as r
from employee_birthdates
where site like 'Z%'
)
where r = 1
) dob
on dob.employee_id = pay.employee_id
join
select 20 as min_age, 24 as max_age from dual
union all select 25 as min_age, 29 as max_age from dual
union all select 30 as min_age, 34 as max_age from dual
union all select 35 as min_age, 39 as max_age from dual
union all select 40 as min_age, 44 as max_age from dual
union all select 45 as min_age, 49 as max_age from dual
union all select 50 as min_age, 54 as max_age from dual
union all select 55 as min_age, 59 as max_age from dual
union all select 60 as min_age, 64 as max_age from dual
union all select 65 as min_age, 69 as max_age from dual
union all select 70 as min_age, 74 as max_age from dual
union all select 75 as min_age, 79 as max_age from dual
union all select 80 as min_age, 84 as max_age from dual
union all select 85 as min_age, 89 as max_age from dual
union all select 90 as min_age, 94 as max_age from dual
union all select 95 as min_age, 99 as max_age from dual
) d
group by emp.pay_plan, d.min_age, d.max_age, emp.pay_level, pay.fy;
Paul - here are three different explain plans.
First, the original one (without the parallel hint):
INSERT STATEMENT ALL_ROWS Cost: 26,684,255 Bytes: 114 Cardinality: 1
35 LOAD AS SELECT EMPLOYEES_BY_AGE_GROUP
34 HASH GROUP BY Cost: 26,684,255 Bytes: 114 Cardinality: 1
33 NESTED LOOPS Cost: 26,684,254 Bytes: 114 Cardinality: 1
14 HASH JOIN Cost: 26,684,222 Bytes: 108 Cardinality: 1
9 MERGE JOIN Cost: 4,408,803 Bytes: 8,322 Cardinality: 146
3 VIEW DONBOT_DBA. Cost: 114,863 Bytes: 29,625,180 Cardinality: 987,506
2 WINDOW SORT PUSHED RANK Cost: 114,863 Bytes: 35,550,216 Cardinality: 987,506
1 TABLE ACCESS FULL TABLE EMPLOYEE_BIRTHDATES Cost: 108,983 Bytes: 35,550,216 Cardinality: 987,506
8 SORT JOIN Cost: 4,293,940 Bytes: 3,645 Cardinality: 135
7 VIEW DONBOT_DBA. Cost: 4,293,939 Bytes: 3,645 Cardinality: 135
6 SORT GROUP BY Cost: 4,293,939 Bytes: 4,185 Cardinality: 135
5 TABLE ACCESS BY INDEX ROWID TABLE PAY_INFO Cost: 4,293,938 Bytes: 4,185 Cardinality: 135
4 INDEX RANGE SCAN INDEX PAY_INFO_INDEX_ON_SITE Cost: 487,124 Cardinality: 402,683,034
13 VIEW DONBOT_DBA Cost: 22,275,300 Bytes: 1,160,143,257 Cardinality: 22,747,907
12 WINDOW SORT Cost: 22,275,300 Bytes: 841,672,559 Cardinality: 22,747,907
11 TABLE ACCESS BY INDEX ROWID TABLE EMPLOYEE_INFO Cost: 22,137,046 Bytes: 841,672,559 Cardinality: 22,747,907
10 INDEX RANGE SCAN INDEX EMPLOYEE_INFO_INDEX_ON_SITE Cost: 50,419 Cardinality: 38,019,281
32 VIEW DONBOT_DBA
31 UNION-ALL
15 FAST DUAL Cost: 2 Cardinality: 1
16 FAST DUAL Cost: 2 Cardinality: 1
17 FAST DUAL Cost: 2 Cardinality: 1
18 FAST DUAL Cost: 2 Cardinality: 1
19 FAST DUAL Cost: 2 Cardinality: 1
20 FAST DUAL Cost: 2 Cardinality: 1
21 FAST DUAL Cost: 2 Cardinality: 1
22 FAST DUAL Cost: 2 Cardinality: 1
23 FAST DUAL Cost: 2 Cardinality: 1
24 FAST DUAL Cost: 2 Cardinality: 1
25 FAST DUAL Cost: 2 Cardinality: 1
26 FAST DUAL Cost: 2 Cardinality: 1
27 FAST DUAL Cost: 2 Cardinality: 1
28 FAST DUAL Cost: 2 Cardinality: 1
29 FAST DUAL Cost: 2 Cardinality: 1
30 FAST DUAL Cost: 2 Cardinality: 1
Next, one with the parallel hint:
INSERT STATEMENT ALL_ROWS Cost: 26,507,111 Bytes: 114 Cardinality: 1
51 LOAD AS SELECT EMPLOYEES_BY_AGE_GROUP
50 PX COORDINATOR
49 PX SEND QC (RANDOM) PARALLEL_TO_SERIAL SYS.:TQ10005 :Q1005 Cost: 26,507,111 Bytes: 114 Cardinality: 1
48 HASH GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 26,507,111 Bytes: 114 Cardinality: 1
47 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1005 Cost: 26,507,111 Bytes: 114 Cardinality: 1
46 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10004 :Q1004 Cost: 26,507,111 Bytes: 114 Cardinality: 1
45 HASH GROUP BY PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 26,507,111 Bytes: 114 Cardinality: 1
44 NESTED LOOPS PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 26,507,111 Bytes: 114 Cardinality: 1
25 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 26,507,109 Bytes: 108 Cardinality: 1
17 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 4,301,500 Bytes: 4,104 Cardinality: 72
16 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10003 :Q1003 Cost: 4,301,500 Bytes: 4,104 Cardinality: 72
15 HASH JOIN PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 4,301,500 Bytes: 4,104 Cardinality: 72
7 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1003
6 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 4,293,939 Bytes: 1,809 Cardinality: 67
5 PX SEND BROADCAST PARALLEL_FROM_SERIAL SYS.:TQ10000 Cost: 4,293,939 Bytes: 1,809 Cardinality: 67
4 VIEW DONBOT_DBA. Cost: 4,293,939 Bytes: 1,809 Cardinality: 67
3 SORT GROUP BY Cost: 4,293,939 Bytes: 2,077 Cardinality: 67
2 TABLE ACCESS BY INDEX ROWID TABLE PAY_INFO Cost: 4,293,938 Bytes: 2,077 Cardinality: 67
1 INDEX RANGE SCAN INDEX PAY_INFO_INDEX_ON_SITE Cost: 487,124 Cardinality: 199,756,151
14 VIEW PARALLEL_COMBINED_WITH_PARENT DONBOT_DBA. :Q1003 Cost: 7,561 Bytes: 29,625,180 Cardinality: 987,506
13 WINDOW SORT PUSHED RANK PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 7,561 Bytes: 35,550,216 Cardinality: 987,506
12 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1003 Cost: 7,561 Bytes: 35,550,216 Cardinality: 987,506
11 PX SEND HASH PARALLEL_TO_PARALLEL SYS.:TQ10002 :Q1002 Cost: 7,561 Bytes: 35,550,216 Cardinality: 987,506
10 WINDOW CHILD PUSHED RANK PARALLEL_COMBINED_WITH_PARENT :Q1002 Cost: 7,561 Bytes: 35,550,216 Cardinality: 987,506
9 PX BLOCK ITERATOR PARALLEL_COMBINED_WITH_CHILD :Q1002 Cost: 7,557 Bytes: 35,550,216 Cardinality: 987,506
8 TABLE ACCESS FULL TABLE PARALLEL_COMBINED_WITH_PARENT EMPLOYEE_BIRTHDATES :Q1002 Cost: 7,557 Bytes: 35,550,216 Cardinality: 987,506
24 BUFFER SORT PARALLEL_COMBINED_WITH_CHILD :Q1004
23 PX RECEIVE PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 22,205,605 Bytes: 575,504,145 Cardinality: 11,284,395
22 PX SEND HASH PARALLEL_FROM_SERIAL SYS.:TQ10001 Cost: 22,205,605 Bytes: 575,504,145 Cardinality: 11,284,395
21 VIEW DONBOT_DBA. Cost: 22,205,605 Bytes: 575,504,145 Cardinality: 11,284,395
20 WINDOW SORT Cost: 22,205,605 Bytes: 417,522,615 Cardinality: 11,284,395
19 TABLE ACCESS BY INDEX ROWID TABLE EMPLOYEE_INFO Cost: 22,137,046 Bytes: 417,522,615 Cardinality: 11,284,395
18 INDEX RANGE SCAN INDEX EMPLOYEE_INFO_INDEX_ON_SITE Cost: 50,419 Cardinality: 18,859,958
43 VIEW PARALLEL_COMBINED_WITH_PARENT DONBOT_DBA. :Q1004 Cost: 32 Bytes: 6 Cardinality: 1
42 UNION-ALL PARALLEL_COMBINED_WITH_PARENT :Q1004
26 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
27 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
28 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
29 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
30 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
31 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
32 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
33 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
34 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
35 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
36 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
37 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
38 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
39 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
40 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
41 FAST DUAL PARALLEL_COMBINED_WITH_PARENT :Q1004 Cost: 2 Cardinality: 1
Finally, one without the parallel hint, and without the index hint on PAY_INFO:
INSERT STATEMENT ALL_ROWS Cost: 23,348,654 Bytes: 114 Cardinality: 1
34 LOAD AS SELECT ARMYMP.EMPLOYEES_BY_AGE
33 HASH GROUP BY Cost: 23,348,654 Bytes: 114 Cardinality: 1
32 NESTED LOOPS Cost: 23,348,653 Bytes: 114 Cardinality: 1
13 HASH JOIN Cost: 23,348,621 Bytes: 108 Cardinality: 1
8 MERGE JOIN Cost: 1,073,202 Bytes: 8,322 Cardinality: 146
3 VIEW DONBOT_DBA. Cost: 114,863 Bytes: 29,625,180 Cardinality: 987,506
2 WINDOW SORT PUSHED RANK Cost: 114,863 Bytes: 35,550,216 Cardinality: 987,506
1 TABLE ACCESS FULL TABLE EMPLOYEE_BIRTHDATES Cost: 108,983 Bytes: 35,550,216 Cardinality: 987,506
7 SORT JOIN Cost: 958,339 Bytes: 3,645 Cardinality: 135
6 VIEW DONBOT_DBA. Cost: 958,338 Bytes: 3,645 Cardinality: 135
5 SORT GROUP BY Cost: 958,338 Bytes: 4,185 Cardinality: 135
4 TABLE ACCESS FULL TABLE PAY_INFO Cost: 958,337 Bytes: 4,185 Cardinality: 135
12 VIEW DONBOT_DBA. Cost: 22,275,300 Bytes: 1,160,143,257 Cardinality: 22,747,907
11 WINDOW SORT Cost: 22,275,300 Bytes: 841,672,559 Cardinality: 22,747,907
10 TABLE ACCESS BY INDEX ROWID TABLE EMPLOYEE_INFO Cost: 22,137,046 Bytes: 841,672,559 Cardinality: 22,747,907
9 INDEX RANGE SCAN INDEX EMPLOYEE_INFO_UIC Cost: 50,419 Cardinality: 38,019,281
31 VIEW DONBOT_DBA. Cost: 32 Bytes: 6 Cardinality: 1
30 UNION-ALL
14 FAST DUAL Cost: 2 Cardinality: 1
15 FAST DUAL Cost: 2 Cardinality: 1
16 FAST DUAL Cost: 2 Cardinality: 1
17 FAST DUAL Cost: 2 Cardinality: 1
18 FAST DUAL Cost: 2 Cardinality: 1
19 FAST DUAL Cost: 2 Cardinality: 1
20 FAST DUAL Cost: 2 Cardinality: 1
21 FAST DUAL Cost: 2 Cardinality: 1
22 FAST DUAL Cost: 2 Cardinality: 1
23 FAST DUAL Cost: 2 Cardinality: 1
24 FAST DUAL Cost: 2 Cardinality: 1
25 FAST DUAL Cost: 2 Cardinality: 1
26 FAST DUAL Cost: 2 Cardinality: 1
27 FAST DUAL Cost: 2 Cardinality: 1
28 FAST DUAL Cost: 2 Cardinality: 1
29 FAST DUAL Cost: 2 Cardinality: 1
I am surprised the cost without the index is less than the cost with it, considering that it is replacing a Table Access By Index Rowid with a Table Access Full on a table with 1 billion (1,000 million) records.
Igor - two questions:
One - I cannot find "Materialize" in the hints in the SQL Reference anywhere. What does it do?
Two - does replacing subqueries with With clauses make that much of a difference? -
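On the second question above, a hedged sketch of the WITH-clause rewrite, reusing the pay subquery from the posted statement (MATERIALIZE is an undocumented hint, which is presumably why it is absent from the SQL Reference):

```sql
-- Subquery factoring: name the subquery once and (optionally) force it
-- into a temporary table with the undocumented MATERIALIZE hint
WITH pay AS (
  SELECT /*+ MATERIALIZE */
         employee_id,
         extract(year from (dte_ppe_end + 92)) AS fy,
         count(employee_id)                    AS num_recs
  FROM   pay_info
  WHERE  extract(month from dte_ppe_end) = 10
  AND    site LIKE 'Z%'
  GROUP  BY employee_id, extract(year from (dte_ppe_end + 92))
)
SELECT fy, count(*) FROM pay GROUP BY fy;
```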
Two questions about Risk Management 2.0
hi experts,
Please find below two questions about Risk Management:
-In SPRO, Risk Management>Create top node: after completing information and executing I have this error:
Error in the ABAP Application Program
The current ABAP program "/ORM/ORM_CREATE_TOP_NODES" had to be terminated because it has come across a statement that unfortunately cannot be executed.
The following syntax error occurred in program "/ORM/SAPLORM_API_SERVICES " in
include "/ORM/LORM_API_SERVICESU10 " in
line 97:
"In PERFORM or CALL FUNCTION "GET_ORGUNIT_THRESHOLDS", the actual parameter "I_ORGUNIT_ID" is incompatible with the formal parameter "IV_ORGUNIT_ID"."
The include has been created and last changed by:
Created by: "SAP "
Last changed by: "SAP "
Do you know where it could come from?
-On the Portal>Risk Management
when I click a link under the risk management menu (activities and risks, risk report, document risk, ...) I always get an internal server error:
While processing the current request, an exception occured which could not be handled by the application or the framework.
If the information contained on this page doesn't help you to find and correct the cause of the problem, please contact your system administrator. To facilitate analysis of the problem, keep a copy of this error page. Hint: Most browsers allow to select all content, copy it and then paste it into an empty document (e.g. email or simple text file).
Do we have to set up some customizing points before accessing these links?
Thank you !
Regards,
Julien
Hi Julien,
I have the same error you described:
-On the Portal>Risk Management
when I click in a link under the risk management menu(activities and risks, risk report, document risk,...) i alway have an internal server error:
While processing the current request, an exception occured which could not be handled by the application or the framework.
If the information contained on this page doesn't help you to find and correct the cause of the problem, please contact your system administrator. To facilitate analysis of the problem, keep a copy of this error page. Hint: Most browsers allow to select all content, copy it and then paste it into an empty document (e.g. email or simple text file).
Do we have to set up some customizing points before accessing these links? "
Were you able to solve this? Please let me know how to resolve it.
Thanks
Regards,
Atul -
Questions about PDF exporting with InDe CS5.5
Hey all,
A couple questions about exporting to PDF from the latest version of InDe.
First, I have noticed that it seems to take a lot longer to get to a PDF. Any suggestions for how to speed up the process? It took 8 minutes or so to generate a low-res PDF (for print) of a 24pp document with a few placed images and vector graphics. Wow, that's a long time to wait, especially for a proof.
Second, the background task... if I get it going on making that 8-minute PDF and then work some more on the document, what exactly is in the PDF? Usually I save before making a PDF or printing. So, is the last version saved what will be in the PDF?
(As an aside, this ability to work on the doc while generating a PDF seems kind of weird. Generally one makes a PDF for proofing, or even for printing, when all the changes have been made and everything is "final". So, I see no benefit to being able to work on my document while it's making a PDF, as I'm probably finished making revisions for the time being. I have to say that I kind of like the progress bar you get when you make an interactive PDF, as you know you can't work on the document when that's on the screen... )
Thanks as always.
First, I have noticed that it seems to take a lot longer to get to a PDF. Any suggestions for how to speed up the process? It took 8 minutes or so to generate a low-res PDF (for print) of a 24pp document with a few placed images and vector graphics. Wow, that's a long time to wait, especially for a proof.
Yes, this is abnormally long (and too long), something is wrong. What's the full version of InDesign you are running, as reported by holding down Cmd or Control and selecting About InDesign?
Second, the background task... if I get it going on making that 8-minute PDF and then work some more on the document, what exactly is in the PDF? Usually I save before making a PDF or printing. So, is the last version saved what will be in the PDF?
Saving is not related. InDesign makes a database snapshot of your document the moment you begin the PDF export, and makes the export relative to that snapshot, regardless of edits you continue to make during the export process, and regardless of saving. Of course saving first is a good idea, for several reasons, not the least of which it sounds like something's fairly seriously wrong with your document or your InDesign installation.
We recommend you trash your preferences and export your document to IDML and see if either of those things changes this 8-minute behavior...err, assuming you're running 7.5.2.318.
(As an aside, this ability to work on the doc while generating a PDF seems kind of weird. Generally one makes a PDF for proofing, or even for printing, when all the changes have been made and everything is "final". So, I see no benefit to being able to work on my document while it's making a PDF, as I'm probably finished making revisions for the time being. I have to say that I kind of like the progress bar you get when you make an interactive PDF, as you know you can't work on the document when that's on the screen... )
Yeah, I think the primary benefit is if you are likely to work on 2 or more files in parallel, so you can finish A and export A and then switch to B. If you'd like a dialog box to pop up when export is done, check out my exportPop script from this post: ANN: automatic dialog after background export (exportPop.jsx).
Hello,
I want to get into SAP BI and have some questions about it.
Is there a big downside to installing only the BI components without the Java BI components - or is that even possible if you want to work with BI? What are the general differences between BI and Java BI?
I think the main aspects i will work with will be data warehousing and some basic reporting.
The reason why i'm asking is that i want to run the BI components in the same system as the ECC Server (its just a test system). I read that the Java BI component needs EPC and EP and that would probably be too much for a single system (6 GB RAM).
Thanks a lot in advance,
Martin
Thanks for your hints, they were really helpful.
As i mentioned i mainly want to work with the general Data Warehouse and Reporting capabilities, so i think it should be okay.
Actually i'm planing to write about BI and SAP within my master thesis.
Another question: what will happen to SAP BI in the future regarding the BusinessObjects acquisition? Is there still any point in learning SAP BI, or will SAP BI disappear? I heard that the basic BI functionality (e.g. the data warehouse) will still be used, but the reporting capabilities would disappear. I'm not sure if that is right and I'm really new to this topic, so any comments are welcome.
Thanks,
Martin -
Questions about Trex migration
Hello experts,
I have some questions about Trex migration.
We have to migrate our Trex instances to another Hardware type (and operating system type).
Is it somehow possible to export the Trex database (indexes etc.) on the source Trex, and then import them on the new target Trex? I ask because I found another thread here saying that this is only possible if the source and target OS are the same, and if the source and target Trex are the same version. I don't know if this is true, though.
Or is it easier to just install a new Trex, and then let it rebuild index etc.? And is this a feasible way to do it?
As you can see i don't know much about Trex, so any hints and good advice is appreciated.
Thanks in advance.
Regards,
Kenneth
Hi Kenneth,
there are different approaches for migrating TREX.
a) Install TREX on a new machine, then export the indexes and afterwards import them. This is possible even if you do not have the same TREX version. (If you only want to upgrade your revision, you just start sapinst or install.sh on the same machine.)
b) Install TREX on a new machine and re-index everything.
But keep in mind TREX is not BIA, even if it is perhaps the same SAP software component.
So re-indexing could take days... not only 10 minutes. This depends on the type of objects to be indexed and on the volume.
Also, it is not recommended in general to switch on delta - for BIA of course, but not in general for TREX.
Best regards
Frank