Why does the 'LIKE' operator take so much time to run?
I have a table T with 3 columns and 3 indexes:
CREATE TABLE T (
id VARCHAR2(38) NOT NULL,
fid VARCHAR2(38) NOT NULL,
val NVARCHAR2(2000) NOT NULL
);
ALTER TABLE T ADD (CONSTRAINT pk_t PRIMARY KEY (id, fid));
CREATE INDEX t_fid ON T(fid);
CREATE INDEX t_val ON T(val);
Then I have the following two queries, which differ in only one place: the 1st one uses the '=' operator whereas the 2nd uses 'LIKE'. Both queries have identical execution plans and return the same single row. However, the 1st query takes almost zero seconds to execute, and the 2nd one takes more than 12 seconds, on a pretty beefy machine. I have played with the target text, placing '%' here and/or there, and observed similar timing every time.
So I am wondering what I should change to make the 'LIKE' operator run as fast as the '=' operator. I know a CONTEXT/CATALOG index is a viable approach, but I am trying to find out if there is a simpler alternative, such as a better use of the index t_val.
1) Query with '=' operator
SELECT id
FROM T
WHERE fid = '{999AE6E4-1ED9-459B-9BB0-45C913668C8C}'
AND val = '3504038055275883124';
2) Query with 'LIKE' operator
SELECT id
FROM T
WHERE fid = '{999AE6E4-1ED9-459B-9BB0-45C913668C8C}'
AND val LIKE '3504038055275883124';
Execution Plan
0 SELECT STATEMENT Optimizer=ALL_ROWS (Cost=1 Card=1 Bytes=99)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'T' (Cost=1 Card=1 Bytes=99)
2 1 INDEX (RANGE SCAN) OF 'T_VAL' (NON-UNIQUE) (Cost=4 Card=12)
I will for sure try to change the order of the PK and see whether there is any impact on performance.
In our application, val is much closer to a unique value than fid. In the example query, the execution plan showed that the index on val was indeed used in the execution of the query. That's why the 1st query took almost no time to return (our table T has more than 6 million rows).
I was hoping the 'LIKE' operator would use the t_val index just as effectively and give performance similar to the '=' operator. But apparently that's not the case, or it needs some tricks.
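For general intuition (this is a sketch, not an explanation of the NVARCHAR2 specifics of the plan above): an index range scan behaves like a binary search over sorted values, which only works when the predicate pins down a leading prefix. A minimal Python sketch, with a made-up value list standing in for the t_val index entries:

```python
import bisect

# Illustrative only: a sorted list standing in for the entries of the t_val index.
vals = sorted(["35040", "3504038055275883124", "999", "abc"])

def prefix_scan(index, prefix):
    # Range scan: possible for LIKE 'prefix%' because all matches are contiguous.
    lo = bisect.bisect_left(index, prefix)
    hi = bisect.bisect_left(index, prefix + "\uffff")
    return index[lo:hi]

def full_scan(index, substring):
    # LIKE '%substring%' has no usable prefix, so every entry must be inspected.
    return [v for v in index if substring in v]
```

A pattern with no wildcards or only a trailing '%' still has a usable prefix; a leading '%' forces the full scan.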
Similar Messages
-
How to find a lock, and why the concurrent program takes too much time
Oracle Apps R12 - Order Management
Hi All,
I am new to Oracle Apps. When I run my concurrent program it takes too much time to complete. How can I find whether there is any lock on a table while the concurrent program is running? Is there any other way to find out why the concurrent program takes so much time to execute, and how can we speed up the process?
Yesterday we made some changes with our DBA to speed up the concurrent program.
The first concurrent program took 2.30 hours and after these changes it takes 1.45 hours. Please suggest whether this is correct.
Logged with System Administrator Responsibility.
Manager – Define – Standard Manager.
Set the PROCESSES value as 15 (changed from 10).
Set the SLEEP SECONDS value as 10 (Changed from 30).
Manager – Define – Standard Manager2.
Set the PROCESSES value as 15 (changed from 10).
Set the SLEEP SECONDS value as 10 (Changed from 60).
Any suggestion on how to find why it takes too much time?
Any help is highly appreciated.
Thanks & Regards
Srikkanth.M
Hi;
Please check the below notes:
FAQ: Common Tracing Techniques within the Oracle Applications 11i/R12 [ID 296559.1]
FAQ - How to Use Debug Tools and Scripts for the APS Suite [ID 245974.1]
Turning Debug Mode On [ID 148140.1]
How to Obtain Debug Log in R12 [ID 787727.1]
How to enable and retrieve debug log messages [ID 433199.1]
Running a Trace on a Form [ID 148143.1]
Running a Trace on a Process [ID 148145.1]
Regards
Helios -
SELECT query takes too much time! Y?
Plz find my SELECT query below:
select w~mandt
w~vbeln w~posnr w~meins w~matnr w~werks w~netwr
w~kwmeng w~vrkme w~matwa w~charg w~pstyv
w~posar w~prodh w~grkor w~antlf w~kztlf w~lprio
w~vstel w~route w~umvkz w~umvkn w~abgru w~untto
w~awahr w~erdat w~erzet w~fixmg w~prctr w~vpmat
w~vpwrk w~mvgr1 w~mvgr2 w~mvgr3 w~mvgr4 w~mvgr5
w~bedae w~cuobj w~mtvfp
x~etenr x~wmeng x~bmeng x~ettyp x~wepos x~abart
x~edatu
x~tddat x~mbdat x~lddat x~wadat x~abruf x~etart
x~ezeit
into table t_vbap
from vbap as w
inner join vbep as x on x~vbeln = w~vbeln and
x~posnr = w~posnr and
x~mandt = w~mandt
where
( ( w~erdat > pre_dat ) and ( w~erdat <= w_date ) ) and
( ( ( erdat > pre_dat and erdat < p_syndt ) or
( erdat = p_syndt and erzet <= p_syntm ) ) ) and
w~matnr in s_matnr and
w~pstyv in s_itmcat and
w~lfrel in s_lfrel and
w~abgru = ' ' and
w~kwmeng > 0 and
w~mtvfp in w_mtvfp and
x~ettyp in w_ettyp and
x~bdart in s_req_tp and
x~plart in s_pln_tp and
x~etart in s_etart and
x~abart in s_abart and
( ( x~lifsp in s_lifsp ) or ( x~lifsp = ' ' ) ).
The problem: it takes too much time to execute this statement.
Could anybody change this statement and help me reduce the DB access time?
Thx
Ways of Performance Tuning
1. Selection Criteria
2. Select Statements
Select Queries
SQL Interface
Aggregate Functions
For all Entries
Select Over more than one internal table
Selection Criteria
1. Restrict the data to the selection criteria itself, rather than filtering it out in the ABAP code using a CHECK statement.
2. Select with selection list.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much better optimized by the code written below, which avoids CHECK and selects with a selection list:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
Select Statements Select Queries
1. Avoid nested selects
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
Note: A simple SELECT loop is a single database access whose result is passed to the ABAP program line by line. Nested SELECT loops mean that the number of accesses in the inner loop is multiplied by the number of accesses in the outer loop. One should therefore use nested SELECT loops only if the selection in the outer loop contains very few lines or the outer loop is a SELECT SINGLE statement.
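The multiplication effect in the note above can be sketched outside ABAP. In Python (table and field names are illustrative only), the nested-loop pattern compares every inner row against every outer row, while a hash join builds a lookup table once and does a single pass over each table:

```python
# Illustrative stand-ins for header (EKKO) and item (EKAN) tables.
ekko = [{"ebeln": "PO1"}, {"ebeln": "PO2"}]
ekan = [{"ebeln": "PO1", "item": 10}, {"ebeln": "PO1", "item": 20},
        {"ebeln": "PO2", "item": 10}]

# Nested-loop equivalent: len(ekko) * len(ekan) comparisons.
nested = [(h["ebeln"], i["item"]) for h in ekko for i in ekan
          if i["ebeln"] == h["ebeln"]]

# Hash-join equivalent: one pass over each table.
by_ebeln = {}
for i in ekan:
    by_ebeln.setdefault(i["ebeln"], []).append(i["item"])
joined = [(h["ebeln"], item)
          for h in ekko for item in by_ebeln.get(h["ebeln"], [])]
```

Both produce the same result set; only the number of row visits differs.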
2. Select all the records in a single shot using into table clause of select statement rather than to use Append statements.
SELECT * FROM SBOOK INTO SBOOK_WA.
CHECK: SBOOK_WA-CARRID = 'LH' AND
SBOOK_WA-CONNID = '0400'.
ENDSELECT.
The above code can be much better optimized by the code written below, which avoids CHECK, selects with a selection list, and fetches the data in one shot using INTO TABLE:
SELECT CARRID CONNID FLDATE BOOKID FROM SBOOK INTO TABLE T_SBOOK
WHERE CARRID = 'LH' AND
CONNID = '0400'.
3. When a base table has multiple indices, the where clause should be in the order of the index, either a primary or a secondary index.
To choose an index, the optimizer checks the field names specified in the where clause and then uses an index that has the same order of the fields. In certain scenarios, it is advisable to check whether a new index can speed up the performance of a program. This will come handy in programs that access data from the finance tables.
4. For testing existence, use Select.. Up to 1 rows statement instead of a Select-Endselect-loop with an Exit.
SELECT * FROM SBOOK INTO SBOOK_WA
UP TO 1 ROWS
WHERE CARRID = 'LH'.
ENDSELECT.
The above code is more optimized as compared to the code mentioned below for testing existence of a record.
SELECT * FROM SBOOK INTO SBOOK_WA
WHERE CARRID = 'LH'.
EXIT.
ENDSELECT.
5. Use Select Single if all primary key fields are supplied in the Where condition .
If all primary key fields are supplied in the Where conditions you can even use Select Single.
Select Single requires one communication with the database system, whereas Select-Endselect needs two.
Select Statements SQL Interface
1. Use column updates instead of single-row updates
to update your database tables.
SELECT * FROM SFLIGHT INTO SFLIGHT_WA.
SFLIGHT_WA-SEATSOCC =
SFLIGHT_WA-SEATSOCC - 1.
UPDATE SFLIGHT FROM SFLIGHT_WA.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
UPDATE SFLIGHT
SET SEATSOCC = SEATSOCC - 1.
2. For all frequently used Select statements, try to use an index.
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
The above mentioned code can be more optimized by using the following code
SELECT * FROM SBOOK CLIENT SPECIFIED INTO SBOOK_WA
WHERE MANDT IN ( SELECT MANDT FROM T000 )
AND CARRID = 'LH'
AND CONNID = '0400'.
ENDSELECT.
3. Using buffered tables improves the performance considerably.
Bypassing the buffer increases the network load considerably.
SELECT SINGLE * FROM T100 INTO T100_WA
BYPASSING BUFFER
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
The above mentioned code can be more optimized by using the following code
SELECT SINGLE * FROM T100 INTO T100_WA
WHERE SPRSL = 'D'
AND ARBGB = '00'
AND MSGNR = '999'.
Select Statements Aggregate Functions
If you want to find the maximum, minimum, sum and average value or the count of a database column, use a select list with aggregate functions instead of computing the aggregates yourself.
Some of the Aggregate functions allowed in SAP are MAX, MIN, AVG, SUM, COUNT, COUNT( * )
Consider the following extract.
Maxno = 0.
Select * from zflight where airln = 'LF' and cntry = 'IN'.
Check zflight-fligh > maxno.
Maxno = zflight-fligh.
Endselect.
The above mentioned code can be much more optimized by using the following code.
Select max( fligh ) from zflight into maxno where airln = 'LF' and cntry = 'IN'.
Select Statements For All Entries
The for all entries creates a where clause, where all the entries in the driver table are combined with OR. If the number of entries in the driver table is larger than rsdb/max_blocking_factor, several similar SQL statements are executed to limit the length of the WHERE clause.
The plus
Large amount of data
Mixing processing and reading of data
Fast internal reprocessing of data
Fast
The Minus
Difficult to program/understand
Memory could be critical (use FREE or PACKAGE size)
Points that must be considered when using FOR ALL ENTRIES:
Check that data is present in the driver table
Sorting the driver table
Removing duplicates from the driver table
Consider the following piece of extract
Loop at int_cntry.
Select single * from zfligh into int_fligh
where cntry = int_cntry-cntry.
Append int_fligh.
Endloop.
The above mentioned can be more optimized by using the following code.
Sort int_cntry by cntry.
Delete adjacent duplicates from int_cntry.
If NOT int_cntry[] is INITIAL.
Select * from zfligh appending table int_fligh
For all entries in int_cntry
Where cntry = int_cntry-cntry.
Endif.
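The three preparation steps above (check the driver table is non-empty, sort it, remove duplicates) can be sketched outside ABAP. A Python sketch with illustrative table names:

```python
# Illustrative stand-ins for the driver table and the database table.
int_cntry = [{"cntry": "IN"}, {"cntry": "US"}, {"cntry": "IN"}]
zfligh = [{"cntry": "IN", "fligh": 1}, {"cntry": "US", "fligh": 2},
          {"cntry": "DE", "fligh": 3}]

# Sort and deduplicate the driver keys (SORT + DELETE ADJACENT DUPLICATES).
keys = sorted({row["cntry"] for row in int_cntry})

int_fligh = []
if keys:  # guard: an empty driver table would otherwise select everything
    wanted = set(keys)
    int_fligh = [row for row in zfligh if row["cntry"] in wanted]
```

The non-empty guard matters most: FOR ALL ENTRIES with an empty driver table selects all rows.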
Select Statements Select Over more than one Internal table
1. It's better to use a view instead of nested Select statements.
SELECT * FROM DD01L INTO DD01L_WA
WHERE DOMNAME LIKE 'CHAR%'
AND AS4LOCAL = 'A'.
SELECT SINGLE * FROM DD01T INTO DD01T_WA
WHERE DOMNAME = DD01L_WA-DOMNAME
AND AS4LOCAL = 'A'
AND AS4VERS = DD01L_WA-AS4VERS
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
The above code can be more optimized by extracting all the data from view DD01V:
SELECT * FROM DD01V INTO DD01V_WA
WHERE DOMNAME LIKE 'CHAR%'
AND DDLANGUAGE = SY-LANGU.
ENDSELECT.
2. To read data from several logically connected tables, use a join instead of nested Select statements. Joins are preferred only if all the primary key fields are available in the WHERE clause for the tables being joined. If the primary keys are not provided in the join, the joining of the tables itself takes time.
SELECT * FROM EKKO INTO EKKO_WA.
SELECT * FROM EKAN INTO EKAN_WA
WHERE EBELN = EKKO_WA-EBELN.
ENDSELECT.
ENDSELECT.
The above code can be much more optimized by the code written below.
SELECT P~F1 P~F2 F~F3 F~F4 INTO TABLE ITAB
FROM EKKO AS P INNER JOIN EKAN AS F
ON P~EBELN = F~EBELN.
3. Instead of using nested Select loops it is often better to use subqueries.
SELECT * FROM SPFLI
INTO TABLE T_SPFLI
WHERE CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK'.
SELECT * FROM SFLIGHT AS F
INTO SFLIGHT_WA
FOR ALL ENTRIES IN T_SPFLI
WHERE SEATSOCC < F~SEATSMAX
AND CARRID = T_SPFLI-CARRID
AND CONNID = T_SPFLI-CONNID
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
The above mentioned code can be even more optimized by using subqueries instead of for all entries.
SELECT * FROM SFLIGHT AS F INTO SFLIGHT_WA
WHERE SEATSOCC < F~SEATSMAX
AND EXISTS ( SELECT * FROM SPFLI
WHERE CARRID = F~CARRID
AND CONNID = F~CONNID
AND CITYFROM = 'FRANKFURT'
AND CITYTO = 'NEW YORK' )
AND FLDATE BETWEEN '19990101' AND '19990331'.
ENDSELECT.
1. Table operations should be done using explicit work areas rather than via header lines.
2. Always try to use binary search instead of linear search. But don't forget to sort your internal table before that.
READ TABLE ITAB INTO WA WITH KEY K = 'X' BINARY SEARCH.
IS MUCH FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
If ITAB has n entries, linear search runs in O( n ) time, whereas binary search takes only O( log2( n ) ).
3. A dynamic key access is slower than a static one, since the key specification must be evaluated at runtime.
READ TABLE ITAB INTO WA WITH KEY K = 'X'.
IS FASTER THAN USING
READ TABLE ITAB INTO WA WITH KEY (NAME) = 'X'.
4. A binary search using a secondary index takes considerably less time.
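The linear-vs-binary READ comparison can be sketched in Python, using the bisect module as a stand-in for BINARY SEARCH (and assuming, as in ABAP, that the table is sorted first):

```python
import bisect

# A small sorted stand-in for an internal table keyed by a single field.
itab = sorted(["A", "K", "X", "Z"])

def read_linear(table, key):
    for row in table:          # O(n): visits rows until the key is found
        if row == key:
            return row
    return None

def read_binary(table, key):
    i = bisect.bisect_left(table, key)   # O(log n): table must be sorted
    return table[i] if i < len(table) and table[i] == key else None
```

Both return the same row; only the number of comparisons differs as the table grows.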
5. LOOP ... WHERE is faster than LOOP/CHECK because LOOP ... WHERE evaluates the specified condition internally.
LOOP AT ITAB INTO WA WHERE K = 'X'.
ENDLOOP.
The above code is much faster than using
LOOP AT ITAB INTO WA.
CHECK WA-K = 'X'.
ENDLOOP.
6. Modifying selected components using MODIFY itab TRANSPORTING f1 f2.. accelerates the task of updating a line of an internal table.
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1 TRANSPORTING DATE.
The above code is more optimized as compared to
WA-DATE = SY-DATUM.
MODIFY ITAB FROM WA INDEX 1.
7. Accessing the table entries directly in a "LOOP ... ASSIGNING ..." accelerates the task of updating a set of lines of an internal table considerably.
Modifying selected components only makes the program faster as compared to modifying all lines completely.
e.g,
LOOP AT ITAB ASSIGNING <WA>.
I = SY-TABIX MOD 2.
IF I = 0.
<WA>-FLAG = 'X'.
ENDIF.
ENDLOOP.
The above code works faster as compared to
LOOP AT ITAB INTO WA.
I = SY-TABIX MOD 2.
IF I = 0.
WA-FLAG = 'X'.
MODIFY ITAB FROM WA.
ENDIF.
ENDLOOP.
8. If collect semantics is required, it is always better to use COLLECT rather than READ ... BINARY SEARCH and then ADD.
LOOP AT ITAB1 INTO WA1.
READ TABLE ITAB2 INTO WA2 WITH KEY K = WA1-K BINARY SEARCH.
IF SY-SUBRC = 0.
ADD: WA1-VAL1 TO WA2-VAL1,
WA1-VAL2 TO WA2-VAL2.
MODIFY ITAB2 FROM WA2 INDEX SY-TABIX TRANSPORTING VAL1 VAL2.
ELSE.
INSERT WA1 INTO ITAB2 INDEX SY-TABIX.
ENDIF.
ENDLOOP.
The above code uses BINARY SEARCH for collect semantics. READ BINARY runs in O( log2(n) ) time. The above piece of code can be more optimized by
LOOP AT ITAB1 INTO WA.
COLLECT WA INTO ITAB2.
ENDLOOP.
SORT ITAB2 BY K.
COLLECT, however, uses a hash algorithm and is therefore independent of the number of entries (i.e. O(1)).
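The COLLECT-via-hash behaviour described above can be sketched in Python with a dict (keys and numeric fields are illustrative): each incoming row is added to the running totals for its key in O(1), with no search step.

```python
# Rows of (key, val1, val2), standing in for ITAB1 work areas.
itab1 = [("K1", 10, 1), ("K2", 5, 2), ("K1", 3, 4)]

# COLLECT equivalent: hash lookup + add, O(1) per row.
collected = {}
for k, val1, val2 in itab1:
    v1, v2 = collected.get(k, (0, 0))
    collected[k] = (v1 + val1, v2 + val2)
```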
9. "APPEND LINES OF itab1 TO itab2" accelerates the task of appending a table to another table considerably as compared to LOOP-APPEND-ENDLOOP.
APPEND LINES OF ITAB1 TO ITAB2.
This is more optimized as compared to
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
10. DELETE ADJACENT DUPLICATES accelerates the task of deleting duplicate entries considerably as compared to READ-LOOP-DELETE-ENDLOOP.
DELETE ADJACENT DUPLICATES FROM ITAB COMPARING K.
This is much more optimized as compared to
READ TABLE ITAB INDEX 1 INTO PREV_LINE.
LOOP AT ITAB FROM 2 INTO WA.
IF WA = PREV_LINE.
DELETE ITAB.
ELSE.
PREV_LINE = WA.
ENDIF.
ENDLOOP.
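DELETE ADJACENT DUPLICATES only collapses neighbouring rows with the same key, which is why the table is sorted first. A Python sketch using itertools.groupby, which likewise groups only adjacent equal keys:

```python
from itertools import groupby

# Note the trailing {"k": 1}: it is NOT adjacent to the first two, so it survives,
# just as DELETE ADJACENT DUPLICATES ... COMPARING K would keep it.
itab = [{"k": 1}, {"k": 1}, {"k": 2}, {"k": 1}]
deduped = [next(rows) for _, rows in groupby(itab, key=lambda r: r["k"])]
```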
11. "DELETE itab FROM ... TO ..." accelerates the task of deleting a sequence of lines considerably as compared to DO -DELETE-ENDDO.
DELETE ITAB FROM 450 TO 550.
This is much more optimized as compared to
DO 101 TIMES.
DELETE ITAB INDEX 450.
ENDDO.
12. Copying internal tables using ITAB2[] = ITAB1[] is much faster than LOOP-APPEND-ENDLOOP.
ITAB2[] = ITAB1[].
This is much more optimized as compared to
REFRESH ITAB2.
LOOP AT ITAB1 INTO WA.
APPEND WA TO ITAB2.
ENDLOOP.
13. Specify the sort key as restrictively as possible to run the program faster.
SORT ITAB BY K. makes the program run faster as compared to SORT ITAB.
Internal Tables contd
Hashed and Sorted tables
1. For single read access hashed tables are more optimized as compared to sorted tables.
2. For partial sequential access sorted tables are more optimized as compared to hashed tables
Hashed And Sorted Tables
Point # 1
Consider the following example where HTAB is a hashed table and STAB is a sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE HTAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
This runs faster for single read access as compared to the following same code for sorted table
DO 250 TIMES.
N = 4 * SY-INDEX.
READ TABLE STAB INTO WA WITH TABLE KEY K = N.
IF SY-SUBRC = 0.
ENDIF.
ENDDO.
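Point #1 can be sketched in Python: a dict stands in for the hashed table (O(1) single-key reads) and a sorted list of pairs for the sorted table (O(log n) reads via binary search). Both find the row; the hashed table's cost simply does not grow with the table size.

```python
import bisect

# 250 rows keyed by multiples of 4, mirroring the DO 250 TIMES example above.
htab = {4 * i: 4 * i for i in range(1, 251)}   # hashed table stand-in
stab = sorted(htab.items())                    # sorted table stand-in

# Hashed table: direct lookup.
hit_h = htab.get(100)

# Sorted table: binary search on the key column.
keys = [k for k, _ in stab]
i = bisect.bisect_left(keys, 100)
hit_s = stab[i][1] if i < len(keys) and keys[i] == 100 else None
```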
Point # 2
Similarly for Partial Sequential access the STAB runs faster as compared to HTAB
LOOP AT STAB INTO WA WHERE K = SUBKEY.
ENDLOOP.
This runs faster as compared to
LOOP AT HTAB INTO WA WHERE K = SUBKEY.
ENDLOOP. -
Bootcamp: WinXP: Network initialization takes too much time
Hey
I've set up a BootCamp partition on my Macbook 6,1: Windows XP Professional 32-bit SP3 (up-to-date).
It's a fresh installation, with the latest BootCamp drivers.
From start on, the network-initialization takes too much time:
- Windows boots up
- Enter password
- Desktop appear
- after 2-3 minutes, the network symbols appear in the taskbar.
If I open the Control Panel and try to enter the networking devices, it loads until the network is really initialized.
I also can't browse to any web-site (through IE).
So, it seems that something blocks or prevent the networking from fast start-up.
I've tried to disable WLAN, same problem
If I disconnect all connections (no LAN-cable, no WLAN configured) its still the same problem.
Does anybody have a similar issue?
You can read: I have installed BootCamp (the Lion one said no, but I have a MacBook 6,1 - yep, this one was shipped with Snow Leopard, so I have the "old" BootCamp drivers with XP drivers on my install disc).
The BootCamp drivers from the Mac OS X 10.6 CD are installed and updated through Apple Update.
So, WinXP should be supported? -
Language import takes too much time in ECC EHP4 system
Hi All,
We are trying to import the languages into the ECC EHP4 system, but each language takes almost 48 hours to get imported,
and even the language supplementation takes too much time.
Please help in tuning the memory parameters if required. The database is DB2.
Hi,
So what is your system details ?
OS ?
Have you just installed the system ?
Just upgraded to EHP4 ?
Any trace files ?
what is the system load.....
Mark -
Updating the ipod with itunes takes too much time
I have about 3000 songs in my iTunes library, but when I change some tracks' names or change anything about my songs in a different way, like putting their names in capital letters or changing their artwork etc., iTunes takes too much time updating the changes to my iPod. Why?
iPod video, Windows XP
Generally the iPod update is pretty quick unless you are making many hundreds of changes at a time. It could be that the USB port is slow; try it in another port, preferably on the back of the PC - some computers have underpowered ports on the front panel. Also, the recommended system requirements are for USB 2; it will work from a USB 1 port but much more slowly if that's the type of port you have.
-
The queries takes too much time
Hi,
I want to compare query times between Spatial and PostGIS, but Spatial takes too much time. I created spatial indexes in both databases, so I have no idea where the problem can be :-(. I'm actually a student and I'm trying this for my essay, but I started with Oracle by myself, so I'll appreciate some help.
I have a table with ca. 7000 polygons and I want to create a new table by dissolving these polygons by the attribute katuze.
Query in Spatial takes 8506 s :
CREATE TABLE dp_diss_p AS
SELECT KATUZE_KOD, SDO_AGGR_UNION(
MDSYS.SDOAGGRTYPE(a.geom, 0.005))GEOM
FROM dp_plochy a GROUP BY a.KATUZE_KOD;
Query in PostGIS takes 10.149 s :
CREATE TABLE plochy_diss AS
SELECT st_union(geom) AS geom
FROM plochy
GROUP BY katuze_kod;
I really don't know what to do; please can somebody give me some advice :-(. Please excuse my English.
Thx Eva
I tried to use sdo_aggr_set_union like this:
CREATE OR REPLACE FUNCTION get_geom_set (table_name VARCHAR2,
column_name VARCHAR2,
predicate VARCHAR2 := NULL)
RETURN SDO_GEOMETRY_ARRAY DETERMINISTIC AS
type cursor_type is REF CURSOR;
query_crs cursor_type ;
g SDO_GEOMETRY;
GeometryArr SDO_GEOMETRY_ARRAY;
where_clause VARCHAR2(2000);
BEGIN
IF predicate IS NULL
THEN
where_clause := NULL;
ELSE
where_clause := ' WHERE ';
END IF;
GeometryArr := SDO_GEOMETRY_ARRAY();
OPEN query_crs FOR ' SELECT ' || column_name ||
' FROM ' || table_name ||
where_clause || predicate;
LOOP
FETCH query_crs into g;
EXIT when query_crs%NOTFOUND ;
GeometryArr.extend;
GeometryArr(GeometryArr.count) := g;
END LOOP;
RETURN GeometryArr;
END;
CREATE TABLE dp_diss_p AS
SELECT KATUZE_KOD, sdo_aggr_set_union (get_geom_set ('dp_plochy', 'geom','CTVUK_KOD <>''1'''), .0005 ) geom
FROM dp_plochy a GROUP BY a.KATUZE_KOD;
It takes 21.721 s, but it returns a collection type and I can't export it to *.shp because GeoRaptor exports it as 'unknown'. Does anybody have an idea what I can do with it to export it as polygon?
Eva -
On one website, it takes time to load the page; on another PC it doesn't take any time (with Internet Explorer). On my PC other websites open quickly, but this website takes too much time with Firefox.
Zepo wrote:
My iMac has been overwhelmed almost since I bought it new. After some digging, the Genius Bar suggested it's my Aperture library being on the same
internal terabyte drive as my operating system.
Having a single internal hard drive overfilled (drives slow as they fill) is very likely contributing to your problems, but IMO "my Aperture library being on the same internal Tera byte drive as my operating system" is very unlikely to be contributing to your problems. In fact the Library should stay on an underfilled (roughly, for speed, I would call ~half full "underfilled") internal drive, not on the Drobo.
Instead build a Referenced-Masters workflow with the Library and OS on an internal drive, Masters on the Drobo, OS 10.6.8 (there have been issues reported with OS 10.7 Lion). Keep Vault backup of the Library on the Drobo, and of course back up all Drobo data off site.
No matter what you do with i/o your C2D Mac is not a strong box for Aperture performance. If you want to really rock Aperture move to one of the better 2011 Sandy Bridge Macs, install 8 GB or more of RAM and build a Referenced-Masters workflow with the Library and OS on an internal solid state drive (SSD).
Personally I would prefer investing in a Thunderbolt RAID rather than in a Drobo but each individual makes his/her own network speed/cost decisions. The Drobo should work OK for referenced Masters even though i/o is limited by the Firewire connection.
Do not forget the need for off site backup. And I suggest that in the process of moving to your new setup it is most important to get the data safely and redundantly copied and generally best to disregard how long it may take.
HTH
-Allen Wicks -
Why deployment with VC 3.1 to weblogic 4.5.1 take so much time
Hi
i am working with visual cafe 3.1 enterprise edition and weblogic 4.5.1
when I deploy an enterprise bean through VC to WebLogic it takes about 1/2 hour. Does anyone know why it takes so much time?
I wish someone would tell me too. It's so bad, I can't have my developers use it. Yes, it works fine when you have the 1/2 hour to wait.
BigMAN
Nielsen Media Research
Niv Haik <[email protected]> wrote in message
news:85jvb6$o2o$[email protected]..
Hi
i am working with visual cafe 3.1 enterprise edition and weblogic 4.5.1
,when i deploy an enterprise bean throught the VC
to the weblogic it takes me about an 1/2 hour ,does any one knows why it
takes so much time . -
I'm using an Apple Mac PC. When we start Windows 7, the Apple mouse doesn't work properly; it takes too much time to get signals from the PC, and many times it doesn't work, but when we use the Mac it moves fast and works properly. Please advise.
thanks
ravi
<Email removed by Host>
Sounds more like Bluetooth rather than the mouse, but without knowing the Mac model type/year we will not know what you have.
All computers are personal computers; a "PC", though, also means "non-Apple" in common usage.
Mac is also a platform and an OS.
Very confused reading what you are trying to tell us. -
After installing Roboform 2.0.0;
Starting Firefox started to take too much time.
When i open a new tab, until the tab is fully loaded and Roboform realizes which website it is, Firefox does not respond.
When i tell Roboform to fill the username and password fields in a tab, until Roboform submits Firefox does not respond.
I mean, as if Roboform is not working background and Firefox cannot work multithread because of it. When i navigate from one tab to another, Roboform is also checking the tab and meanwhile Firefox is waiting for Roboform to finish its work.
Please help me about this problem.
Regards
İlhan Tanrıverdi ([email protected])
Start Firefox in <u>[[Safe Mode]]</u> to check if one of the extensions is causing the problem (switch to the DEFAULT theme: Firefox (Tools) > Add-ons > Appearance/Themes).
*Don't make any changes on the Safe mode start window.
*https://support.mozilla.com/kb/Safe+Mode
*https://support.mozilla.com/kb/Troubleshooting+extensions+and+themes -
Take too much time to render the OAF page from JDeveloper
Hi Gurus:
It takes over 30 or 40 minutes to run an OAF page from JDeveloper. My JDeveloper is 10.1.3.3.0, and it connects to the remote DB server. I don't use a VPN. One of my colleagues in the same office takes about 8 minutes to run an OAF page. It's painful to take so much time; can anyone shed some light on this?
Regards,
Jiang
Thanks John.
I will move my question to Technology - OA Framework
Regards,
Flywin Jiang -
Parsing the query takes too much time.
Hello.
I am hitting a bug in Oracle XE (parsing some query takes too much time).
A similar bug was previously found in the commercial release and was successfully fixed (SR Number 3-3301916511).
Please, raise a bug for Oracle XE.
Steps to reproduce the issue:
1. Extract files from testcase_dump.zip and testcase_sql.zip
2. Under username SYSTEM execute script schema.sql
3. Import data from file TESTCASE14.DMP
4. Under username SYSTEM execute script testcase14.sql
SQL text can be downloaded from http://files.mail.ru/DJTTE3
Datapump dump of testcase can be downloaded from http://files.mail.ru/EC1J36
Regards,
Viacheslav.
Bug number? Which version does the fix apply to?
Relevant Note that describes the problem and points out bug/patch availability?
With a little luck some PSEs might be "backported", since 11g XE is not base release e.g. 11.2.0.1. -
I have a problem with Elements 8. Whenever I start a new project, the program searches for the old, most recently used film material I used before. That means, if I use a different DVD, it takes so much time until the program starts. So how can I stop the automatic loading of old material?
Thanks
You have 2 unreachable statements in this method.
public static int eval(String s2, String op, String s3) {
return lookup(s2);
return lookup(op);
return lookup(s3);
}
You're missing a } at the end of this method:
public static int lookup(String s) {
for(int k = 0; k < symbols.length; k++){
String symbol = symbols[k];
if(s.equals(symbol))
return k;
}
You have some loose } and ; at the end of the file:
public static void main(String args[])
commandline();
}
} -
SAP application takes too much time to start
Sir, my SAP Solution Manager server takes too much time to start.
When I look in the Windows MMC, it shows that the system is accessing <sid>\sys\global\security\lib\engine\iaik_jce.jar (which it accesses for nearly 20 to 30 min). During this, the application log shows errors like:
1. transaction canceled 15 100()
2. runtime error CREATE_DATA_UNKNOWN_TYPE
Can anyone help?
Hi, thanks for responding.
Actually, today when I start the system it shows many errors every minute, like operating system call connection failed (no. 100061),
the same as in my last post.
this is st22 dump
Current Date Time Application Server User Name Client ID Keep Name of runtime error Exception Appl. component Report Name Index of Work Process Transaktions-ID
17.06.2010 10:11:33 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 0 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:11:32 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 0 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:10:42 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 1 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:10:38 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 1 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:09:29 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 1 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:07:37 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 0 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:07:36 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 0 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:07:28 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 0 70C979DF97D7F14AB156000C29EE2DC4
17.06.2010 10:07:06 solman_ASM_00 SMD_RFC 300 C CREATE_DATA_UNKNOWN_TYPE CX_SY_CREATE_DATA_ERROR 0 70C979DF97D7F14AB156000C29EE2DC4