Do many extents decrease performance?
hallo,
is it true that if a segment is fragmented into many extents, performance decreases?
If yes, what is the threshold?
If 10 is the threshold, does this query:
SELECT SEGMENT_NAME, BYTES, BLOCKS, EXTENTS FROM DBA_SEGMENTS
WHERE OWNER='xxxxx' AND EXTENTS>10 ;
show the "bad" segments?
Thanks in advance
Multiple extents would only affect the performance of full table scans, since single-block IO is used for key access to an index and for indexed access to a table block from an indexed key.
With a full table scan, the number of extents only affects IO performance if the extent size is not an even multiple of the multiblock IO size, since an extra IO is then required to pick up the odd blocks.
Example: 64K multiblock IO size. If the table is in 100 extents of 64K each, it will take 100 IO requests to read the table. If the table is in a single 6400K extent, it will also take 100 IO requests to read the table. Since 100 = 100, the performance is the same.
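That counting argument can be sketched in a few lines (a hypothetical illustration of the arithmetic, not Oracle internals; `full_scan_ios` is an invented helper):

```python
def full_scan_ios(table_kb, extent_kb, io_kb):
    """IO requests for a full scan: each extent is read in multiblock
    chunks, and a partly filled chunk still costs a whole IO."""
    extents = table_kb // extent_kb
    ios_per_extent = -(-extent_kb // io_kb)  # ceiling division
    return extents * ios_per_extent

# 64K multiblock IO size, 6400K table:
print(full_scan_ios(6400, 64, 64))    # 100 extents of 64K  -> 100 IOs
print(full_scan_ios(6400, 6400, 64))  # one 6400K extent    -> 100 IOs
# An extent size that is not a multiple of the IO size costs extra:
print(full_scan_ios(6400, 100, 64))   # 64 extents of 100K  -> 128 IOs
```

The last line shows the only case the answer flags as slower: odd-sized extents waste part of each multiblock read.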
I did not follow the link, but there should be a proof at Tom's site. It isn't very hard to run this test and check the IO statistics oneself.
HTH -- Mark D Powell --
Similar Messages
-
Short dump in ALV (too many parameters in PERFORM)
I am getting a problem in this program again.
I get a short dump: too many parameters in PERFORM.
<CODE>Report Z_50840_ALV
Line-size 80
Line-count 64
Message-id ZZ
No Standard Page Heading.
* Copyright statement *
* @ copyright 2007 by Intelligroup Inc. *
* Program Details *
* Program Name: Z_50840_ALV
* Date        : 19.07.2007
* Author      : Vasudevaraman V
* Description : Test Program
* Transport No:
* Change Log *
* Date        :
* Author      :
* Description :
* Transport No:
* Tables *
Tables: vbrk.
* Type Pools *
Type-Pools: SLIS.
* Variables *
Data: GV_REPID TYPE SY-REPID.
* Structures *
Data: BEGIN OF GIT_VBRK OCCURS 0,
VBELN LIKE VBRK-VBELN, "Billing Document
FKART LIKE VBRK-FKART, "Billing Type
KNUMV LIKE VBRK-KNUMV, "Number of the document condition
BUKRS LIKE VBRK-BUKRS, "Company code
NETWR LIKE VBRK-NETWR, "Net value in document currency
WAERK LIKE VBRK-WAERK, "SD document currency in basic list
END OF GIT_VBRK,
GIT_FCAT TYPE SLIS_T_FIELDCAT_ALV,
WA_FCAT TYPE slis_fieldcat_alv,
GIT_EVENTS TYPE SLIS_T_EVENT,
WA_EVENTS TYPE SLIS_ALV_EVENT.
* Field Symbols *
Field-symbols: <fs_xxxx>.
* Selection Screen *
SELECTION-SCREEN BEGIN OF BLOCK B1 WITH FRAME TITLE TEXT-001.
SELECT-OPTIONS: S_VBELN FOR VBRK-VBELN.
PARAMETERS: LISTDISP RADIOBUTTON GROUP G1,
GRIDDISP RADIOBUTTON GROUP G1 DEFAULT 'X'.
SELECTION-SCREEN END OF BLOCK B1.
* Initialization *
Initialization.
GV_REPID = SY-REPID.
* At Selection Screen *
At selection-screen.
* Start Of Selection *
Start-of-selection.
SET PF-STATUS 'ABC'(001).
PERFORM GET_BILLING_DETAILS.
PERFORM FIELD_CATALOGUE.
PERFORM GET_EVENTS.
* End Of Selection *
End-of-selection.
PERFORM DISPLAY_BILLING_DETAILS.
* Top Of Page *
Top-of-page.
* End Of Page *
End-of-page.
*&      Form  GET_BILLING_DETAILS
*       text
*  -->  p1 text
*  <--  p2 text
FORM GET_BILLING_DETAILS .
SELECT VBELN
FKART
KNUMV
BUKRS
NETWR
WAERK
FROM VBRK
INTO TABLE GIT_VBRK
WHERE VBELN IN S_VBELN.
IF SY-SUBRC = 0.
SORT GIT_VBRK BY VBELN.
ENDIF.
ENDFORM. " GET_BILLING_DETAILS
*&      Form  FIELD_CATALOGUE
*       text
*  -->  p1 text
*  <--  p2 text
FORM FIELD_CATALOGUE .
CALL FUNCTION 'REUSE_ALV_FIELDCATALOG_MERGE'
EXPORTING
I_PROGRAM_NAME = GV_REPID
I_INTERNAL_TABNAME = 'GIT_VBRK'
* I_STRUCTURE_NAME = I_STRUCTURE_NAME
I_CLIENT_NEVER_DISPLAY = 'X'
I_INCLNAME = GV_REPID
I_BYPASSING_BUFFER = 'X'
I_BUFFER_ACTIVE = ' '
CHANGING
CT_FIELDCAT = GIT_FCAT
EXCEPTIONS
INCONSISTENT_INTERFACE = 1
PROGRAM_ERROR = 2
OTHERS = 3.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDFORM. " FIELD_CATALOGUE
*&      Form  DISPLAY_BILLING_DETAILS
*       text
*  -->  p1 text
*  <--  p2 text
FORM DISPLAY_BILLING_DETAILS .
IF LISTDISP = 'X'.
CALL FUNCTION 'REUSE_ALV_LIST_DISPLAY'
EXPORTING
I_INTERFACE_CHECK = ' '
I_BYPASSING_BUFFER = 'X'
I_BUFFER_ACTIVE = ' '
I_CALLBACK_PROGRAM = GV_REPID
I_CALLBACK_PF_STATUS_SET = ' '
I_CALLBACK_USER_COMMAND = ' '
* I_STRUCTURE_NAME = I_STRUCTURE_NAME
* IS_LAYOUT = IS_LAYOUT
IT_FIELDCAT = GIT_FCAT
* IT_EXCLUDING = IT_EXCLUDING
* IT_SPECIAL_GROUPS = IT_SPECIAL_GROUPS
* IT_SORT = IT_SORT
* IT_FILTER = IT_FILTER
* IS_SEL_HIDE = IS_SEL_HIDE
I_DEFAULT = 'X'
I_SAVE = ' '
* IS_VARIANT = IS_VARIANT
IT_EVENTS = GIT_EVENTS
* IT_EVENT_EXIT = IT_EVENT_EXIT
* IS_PRINT = IS_PRINT
* IS_REPREP_ID = IS_REPREP_ID
I_SCREEN_START_COLUMN = 0
I_SCREEN_START_LINE = 0
I_SCREEN_END_COLUMN = 0
I_SCREEN_END_LINE = 0
* IR_SALV_LIST_ADAPTER = IR_SALV_LIST_ADAPTER
* IT_EXCEPT_QINFO = IT_EXCEPT_QINFO
* I_SUPPRESS_EMPTY_DATA = ABAP_FALSE
* IMPORTING
* E_EXIT_CAUSED_BY_CALLER = E_EXIT_CAUSED_BY_CALLER
* ES_EXIT_CAUSED_BY_USER = ES_EXIT_CAUSED_BY_USER
TABLES
T_OUTTAB = GIT_VBRK
EXCEPTIONS
PROGRAM_ERROR = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ELSE.
CALL FUNCTION 'REUSE_ALV_GRID_DISPLAY'
EXPORTING
I_INTERFACE_CHECK = ' '
I_BYPASSING_BUFFER = 'X'
I_BUFFER_ACTIVE = ' '
I_CALLBACK_PROGRAM = GV_REPID
I_CALLBACK_PF_STATUS_SET = ' '
I_CALLBACK_USER_COMMAND = 'USER_COMMAND'
I_CALLBACK_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_TOP_OF_PAGE = ' '
I_CALLBACK_HTML_END_OF_LIST = ' '
* I_STRUCTURE_NAME = I_STRUCTURE_NAME
I_BACKGROUND_ID = ' '
* I_GRID_TITLE = I_GRID_TITLE
* I_GRID_SETTINGS = I_GRID_SETTINGS
* IS_LAYOUT = IS_LAYOUT
IT_FIELDCAT = GIT_FCAT
* IT_EXCLUDING = IT_EXCLUDING
* IT_SPECIAL_GROUPS = IT_SPECIAL_GROUPS
* IT_SORT = IT_SORT
* IT_FILTER = IT_FILTER
* IS_SEL_HIDE = IS_SEL_HIDE
I_DEFAULT = 'X'
I_SAVE = ' '
* IS_VARIANT = IS_VARIANT
IT_EVENTS = GIT_EVENTS
* IT_EVENT_EXIT = IT_EVENT_EXIT
* IS_PRINT = IS_PRINT
* IS_REPREP_ID = IS_REPREP_ID
I_SCREEN_START_COLUMN = 0
I_SCREEN_START_LINE = 0
I_SCREEN_END_COLUMN = 0
I_SCREEN_END_LINE = 0
I_HTML_HEIGHT_TOP = 0
I_HTML_HEIGHT_END = 0
* IT_ALV_GRAPHICS = IT_ALV_GRAPHICS
* IT_HYPERLINK = IT_HYPERLINK
* IT_ADD_FIELDCAT = IT_ADD_FIELDCAT
* IT_EXCEPT_QINFO = IT_EXCEPT_QINFO
* IR_SALV_FULLSCREEN_ADAPTER = IR_SALV_FULLSCREEN_ADAPTER
* IMPORTING
* E_EXIT_CAUSED_BY_CALLER = E_EXIT_CAUSED_BY_CALLER
* ES_EXIT_CAUSED_BY_USER = ES_EXIT_CAUSED_BY_USER
TABLES
T_OUTTAB = GIT_VBRK
EXCEPTIONS
PROGRAM_ERROR = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
ENDIF.
ENDFORM. " DISPLAY_BILLING_DETAILS
*&      Form  GET_EVENTS
*       text
*  -->  p1 text
*  <--  p2 text
FORM GET_EVENTS .
CALL FUNCTION 'REUSE_ALV_EVENTS_GET'
EXPORTING
I_LIST_TYPE = 0
IMPORTING
ET_EVENTS = GIT_EVENTS
EXCEPTIONS
LIST_TYPE_WRONG = 1
OTHERS = 2.
IF SY-SUBRC <> 0.
MESSAGE ID SY-MSGID TYPE SY-MSGTY NUMBER SY-MSGNO
WITH SY-MSGV1 SY-MSGV2 SY-MSGV3 SY-MSGV4.
ENDIF.
LOOP AT GIT_EVENTS INTO WA_EVENTS.
CASE WA_EVENTS-NAME.
WHEN 'USER_COMMAND'.
WA_EVENTS-FORM = 'USER_COMMAND'.
ENDCASE.
MODIFY GIT_EVENTS FROM WA_EVENTS INDEX SY-TABIX.
ENDLOOP.
ENDFORM. " GET_EVENTS
FORM USER_COMMAND.
WRITE :/ 'USER_COMMAND'.
ENDFORM.</CODE>
REGARDS,
SURAJ

I have run the program in my system and get the following display instead of a dump.
Bill.Doc. BillT Doc.cond. CoCd Net value Curr.
90000763 B2 0000002800 1000 0.00 DEM
90005177 F2 0000012141 1000 5,500.00 DEM
90005178 F2 0000012144 1000 32,838.00 DEM
90005179 F2 0000012146 1000 6,100.00 DEM
90005180 F2 0000012147 1000 6,100.00 DEM
90005182 S1 0000012226 1000 5,500.00 DEM
90005183 S1 0000012227 1000 32,838.00 DEM
90005184 S1 0000012228 1000 6,100.00 DEM
90005185 S1 0000012229 1000 6,100.00 DEM
90005186 F2 0000012230 1000 6,100.00 DEM
90005187 F2 0000012231 1000 6,100.00 DEM
90005188 F2 0000012232 1000 32,778.00 DEM
90005189 F2 0000012233 1000 34,354.00 DEM
90005190 F2 0000012234 1000 19,991.00 DEM
90005191 F2 0000012235 1000 19,719.00 DEM
90005192 F2 0000012236 1000 43,004.00 DEM
90005193 F2 0000012237 1000 9,242.00 DEM
90005194 F2 0000012238 1000 12,156.00 DEM
90005195 F2 0000012239 1000 7,294.00 DEM
90005196 F2 0000012240 1000 9,694.00 DEM
90005197 F2 0000012241 1000 32,838.00 DEM
90005198 F2 0000012242 1000 9,352.00 DEM
90005199 F2 0000012243 1000 13,013.00 DEM -
If we create more indexes, in what way does it decrease performance?
> Even if you want to do some DML operations
> on that table, you have to disable the indexes and do
> the operations.
Could you prove this or give an example?
Did I do something wrong in the past as I did not observe this? -
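One way to see the DML cost of extra indexes empirically — a sqlite3 sketch, not the poster's Oracle system, and the table and index names are invented; the principle (every index must be maintained on each INSERT) carries over:

```python
import sqlite3
import time

def insert_time(n_indexes, rows=20000):
    """Time inserting `rows` rows into a table carrying n_indexes indexes."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER, d INTEGER)")
    for i in range(n_indexes):
        # each extra index is one more structure to update per inserted row
        db.execute(f"CREATE INDEX ix{i} ON t (a, b, c, d)")
    start = time.perf_counter()
    db.executemany("INSERT INTO t VALUES (?, ?, ?, ?)",
                   ((i, i * 7, i * 13, i * 31) for i in range(rows)))
    db.commit()
    return time.perf_counter() - start

print(f"0 indexes: {insert_time(0):.3f}s")
print(f"5 indexes: {insert_time(5):.3f}s")  # slower: index maintenance
```

Note that the inserts still succeed with the indexes enabled — nothing has to be disabled, it is simply slower, which is the usual answer to the skeptical question above.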
So many loops decreasing the performance
There are many nested loops here. Is there any other statement to improve the performance?
LOOP AT p0000 WHERE begda LE pn-endda
AND endda GE pn-begda.
wa_datatab-pernr = p0000-pernr.
LOOP AT p0001 WHERE begda LE p0000-endda
AND endda GE p0000-begda.
wa_datatab-werks = p0001-werks.
wa_datatab-bukrs = p0001-bukrs.
wa_datatab-persg = p0001-persg.
* read technical date of entry and first working day
CLEAR emp_period.
LOOP AT p0041.
CLEAR p41dat.
DO 12 TIMES
VARYING p41dat FROM p0041-dar01 NEXT p0041-dar02.
IF ( p41dat-dar EQ ca_dar01 OR
p41dat-dar EQ ca_dar04 OR
p41dat-dar EQ ca_dar40 ) AND
NOT p41dat-dat IS INITIAL.
PERFORM employment_period USING p41dat-dat
pn-begda
emp_period.
CASE p41dat-dar.
WHEN ca_dar01.
wa_datatab-dat01 = p41dat-dat.
wa_datatab-dat01_period = emp_period.
WHEN ca_dar40.
wa_datatab-dat40 = p41dat-dat.
wa_datatab-dat40_period = emp_period.
WHEN ca_dar04.
wa_datatab-dat04 = p41dat-dat.
wa_datatab-dat04_period = emp_period.
ENDCASE.
ENDIF.
ENDDO.
ENDLOOP.
ENDLOOP.

Hi, check this program for how to get the best performance:
report ztest.
tables:pa0002,pa0008,pa0021,pa0041.
data: begin of itab occurs 0,
pernr like pa0002-pernr,
vorna like pa0002-vorna,
nachn like pa0002-nachn,
end of itab.
data: begin of itab1 occurs 0,
pernr like pa0008-pernr,
begda like pa0008-begda,
stvor like pa0008-stvor,
ansal like pa0008-ansal,
end of itab1.
data :begin of itab2 occurs 0,
pernr like pa0021-pernr,
favor like pa0021-favor,
fanam like pa0021-fanam,
end of itab2.
data:begin of itab3 occurs 0,
pernr like pa0041-pernr,
dar01 like pa0041-dar01,
dat01 like pa0041-dat01,
end of itab3.
data:begin of final occurs 0,
pernr like pa0002-pernr,
vorna like pa0002-vorna,
nachn like pa0002-nachn,
begda like pa0008-begda,
stvor like pa0008-stvor,
ansal like pa0008-ansal,
favor like pa0021-favor,
fanam like pa0021-fanam,
dar01 like pa0041-dar01,
dat01 like pa0041-dat01,
end of final.
select-options:s_pernr for pa0002-pernr,
s_date for sy-datum.
select pernr
vorna
nachn
from pa0002
into table itab
where pernr in s_pernr
and begda le s_date-low
and endda ge s_date-high.
select pernr
begda
stvor
ansal
from pa0008
into table itab1
for all entries in itab
where pernr = itab-pernr
and begda le s_date-low
and endda ge s_date-high.
select pernr
favor
fanam
from pa0021
into table itab2
for all entries in itab1
where pernr = itab1-pernr
and begda le s_date-low
and endda ge s_date-high.
select pernr
dar01
dat01
from pa0041
into table itab3
for all entries in itab2
where pernr = itab2-pernr
and begda le s_date-low
and endda ge s_date-high.
loop at itab.
final-pernr = itab-pernr.
final-vorna = itab-vorna.
final-nachn = itab-nachn.
read table itab1 with key pernr = itab-pernr.
final-begda = itab1-begda.
final-stvor = itab1-stvor.
final-ansal = itab1-ansal.
read table itab2 with key pernr = itab1-pernr.
final-favor = itab2-favor.
final-fanam = itab2-fanam.
read table itab3 with key pernr = itab2-pernr.
final-dar01 = itab3-dar01 .
final-dat01 = itab3-dat01.
append final.
clear final.
endloop.
loop at final.
write:final-pernr ,
final-vorna ,
final-nachn ,
final-begda ,
final-stvor ,
final-ansal ,
final-favor ,
final-fanam ,
final-dar01 ,
final-dat01 .
endloop.
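The gain in this program comes from replacing nested SELECT loops with single passes plus keyed lookups (READ TABLE, ideally with BINARY SEARCH on a sorted table). A hypothetical Python analogue of the itab/itab1 join, with made-up rows:

```python
# Invented rows standing in for PA0002 (itab) and PA0008 (itab1).
itab = [{"pernr": 1, "vorna": "A"}, {"pernr": 2, "vorna": "B"}]
itab1 = [{"pernr": 1, "begda": "20070101"}, {"pernr": 2, "begda": "20070201"}]

# Build the lookup once -- the analogue of SORT + READ TABLE ... BINARY SEARCH.
by_pernr = {row["pernr"]: row for row in itab1}

final = []
for row in itab:  # one pass instead of a loop nested inside a loop
    match = by_pernr.get(row["pernr"], {})
    final.append({**row, "begda": match.get("begda")})

print(final[0])  # {'pernr': 1, 'vorna': 'A', 'begda': '20070101'}
```

The nested-loop version does len(itab) × len(itab1) comparisons; this does one lookup per driver row.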
regards,
venkat. -
Illustrator is installed on a Windows laptop: Windows 7, Illustrator 16.0.3 (latest update available).
We tried to reinstall the application, but it had no effect.
Is it normal that Illustrator creates so many files, hundreds in a few hours, taking 3 to 7 GB of disk space?
When I close Illustrator, all the files are deleted.
When working with Illustrator the computer is slow.
Thanks

I am aware of filesystem defragmentation, but most tools require offline operation, and if you pre-write the disk space you need, defragmentation is only going to be required if the file grows a lot further than you assumed.
The myths about filesystems such as ext3 and XFS being "good" at locality and never needing defragmentation are unfortunately all incorrect, and based on the average user's use of text files and other relatively small files. Other filesystems on UNIX are better or worse than these two. In a server environment with 24-hour operation, there is no downtime available to perform a defrag unless it is scheduled, or you use an HA solution that allows it. By far the simplest solution is to pre-write the file as fast as possible.
The effects of doing this are marked:
1) Write your file
2) defrag it;
3) Copy the key/data pairs from the file to yet another file, but this time in key sorted order (traverse the original, write the new one);
4) Defrag this new file and compress the keys, etc.
Then measure the performance of file traversal. Here you will reap the benefit of allowing the underlying operating system/filesystem to manage the reads and writes, as it will almost always perform track reads/writes.
Of course, if your database access is entirely random, then the benefit is reduced, but even so, the locality of allocation will reduce the disk dancing required until your data is in cache, and the trickle daemon will be more efficient.
Hence, the ability to hole-fill is a trivial piece of code that is filesystem agnostic and entirely backward compatible (if you don't make a call to do it, then it will just behave as it always has). Something as simple as DB->pre_alloc(DB, sizeinMb); would do it (there is little point in doing this for less than megabytes).
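A minimal sketch of that pre-write/hole-fill idea (plain Python; `preallocate` is an invented helper, and on Linux `os.posix_fallocate` does the same job at the system-call level):

```python
import os

CHUNK = 1024 * 1024  # write 1 MB at a time

def preallocate(path, size_bytes):
    """Write zeros out to the final size so the filesystem allocates real
    blocks now, instead of fragmenting the file as it grows later."""
    zeros = b"\0" * CHUNK
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(remaining, CHUNK)
            f.write(zeros[:n])
            remaining -= n

preallocate("db.file", 16 * 1024 * 1024)
print(os.path.getsize("db.file"))  # 16777216
```

Simply calling f.truncate(size) would usually create a sparse file — exactly the holes being argued against here — so the zeros are written out for real.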
Jim -
Q: ABAP code from db to memory decreases performance?
Hi Gurus,
We have a problem with some ABAP code (a start routine in a BI load). Basically the situation is: we had some code that builds a hierarchy (and inserts it into the hierarchy table) based on an attribute load, which worked fine but was too slow.
As we do not need the hierarchy anymore, we changed the code to only build the hierarchy in memory (the reason we need it at all is that building it is the only way to ensure we load the right records in the attribute load), and now it is slower... which we do not understand.
In general we have replaced:
SELECT SINGLE * FROM /BIC/HZTVFKORG INTO nodelast
WHERE nodeid = lastnode2.
With:
READ TABLE VirtHierarchy INTO nodelast
WITH KEY nodeid = lastnode2.
And replaced:
UPDATE /BIC/HZTVFKORG FROM nodelast.
With:
MODIFY TABLE VirtHierarchy FROM nodelast.
And replaced:
INSERT INTO /BIC/HZTVFKORG VALUES node.
With:
APPEND node TO VirtHierarchy.
As we see it, this should increase the performance of the start routine and the load (it takes several hours for just 50000 records), but it is actually running slower now...
Does anybody have any idea about why this is not improving performance?
Thank you in advance,
Mikael

Dear Mikael Kilaker,
There are a few reasons:
1. Data overload in memory.
If you try to execute
SELECT SINGLE * FROM /BIC/HZTVFKORG INTO nodelast
WHERE nodeid = lastnode2.
With:
READ TABLE VirtHierarchy INTO nodelast
WITH KEY nodeid = lastnode2.
And replaced:
UPDATE /BIC/HZTVFKORG FROM nodelast.
With:
MODIFY TABLE VirtHierarchy FROM nodelast.
And replaced:
INSERT INTO /BIC/HZTVFKORG VALUES node.
With:
APPEND node TO VirtHierarchy.
inside any loop, this approach can make the system slow, because it loads the entire table into memory and the system still needs to allocate space for each selected value, which makes it less effective when you process a large volume of data.
2. Unsorted data.
It is really good practice to sort the internal table (VirtHierarchy). It is an extra step, but it greatly decreases response time when the system works with sorted data in an internal table.
3. Use binary search in READ table.
Try to use this code
READ TABLE VirtHierarchy INTO nodelast
WITH KEY nodeid = lastnode2 BINARY SEARCH.
This practice will also increase performance when you process large amounts of data in an internal table.
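The difference between the two READ TABLE forms is a linear scan versus a binary search on a sorted table. A Python sketch of the same idea, with invented node data and `bisect` standing in for BINARY SEARCH:

```python
import bisect

# Sorted 'internal table' keyed by nodeid (invented data).
virt_hierarchy = [{"nodeid": i, "name": f"node{i}"} for i in range(100000)]
keys = [row["nodeid"] for row in virt_hierarchy]  # already ascending

def read_linear(nodeid):
    """READ TABLE ... WITH KEY on an unsorted table: O(n) scan."""
    for row in virt_hierarchy:
        if row["nodeid"] == nodeid:
            return row
    return None

def read_binary(nodeid):
    """READ TABLE ... WITH KEY ... BINARY SEARCH: O(log n) on sorted data."""
    i = bisect.bisect_left(keys, nodeid)
    if i < len(keys) and keys[i] == nodeid:
        return virt_hierarchy[i]
    return None

print(read_binary(99999) == read_linear(99999))  # same row, far fewer steps
```

As point 2 says, BINARY SEARCH is only valid if the table really is sorted by the key; on unsorted data it silently returns wrong results.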
Do reward points if this helps you -
Does pure activation of BCT DataSources decrease performance?
Hello,
if I just activate (all) DataSources from BCT in an ECC system (without actually using them for uploading data to BW), does this decrease any performance in ECC? Or is it performance-wise neutral, as the activation only creates some metadata?
Full points will be assigned!
Thanks a lot,
German

Hi,
keep in mind that the data for BW is taken out of the delta queue.
Before the data reaches the delta queue, it is stored either in VBDATA or in another qRFC queue (I call it the extraction queue). Where it comes into the delta queue from is controlled by the method maintained in the logistics extractor.
VBDATA is the table where ALL POSTING RECORDS ARE STORED!
So when you have activated the extractors for logistics, all postings of documents in the system (e.g. sales docs...) run through VBDATA, as do the V3 posts of these documents for R/3.
If the V3 programs are not scheduled, the table will overflow and the system will be dead, meaning no one can post any documents.
This can only occur if unserialized V3 is chosen.
Delta Queued writes data into the qRFC queue.
As long as you haven't run the V3 program, it will stay there.
The V3 program takes the data and writes it to the delta queue for BW.
If no successful init has taken place, the data is gone at that point.
The request from BW only takes the data out of the delta queue, not out of the extraction queue I described earlier.
Cheers
Sven -
Embedded jar files decrease performance
Hi.
When deploying a project using the SAP BAPI eWay we get this warning:
[#|2008-07-24T17:07:59.687+0200|WARNING|IS5.1.3|javax.enterprise.system.tools.deployment|_ThreadID=18; ThreadName=http35000-Processor1;|Component svc_BAPItest_ISH_out.jar contains one or more embedded jars. This will significantly decrease deployment performance. It is recommended to move the embedded jar file(s) to top level of the EAR.|#]
Why? As in other projects, we imported some jar files (e.g. log4j) into the JCD(s). However, for the other projects we never got this warning.
Does runtime performance decrease as well? How do we move the files to the top level of the EAR?
Best Regards,
Heiner.
(Using JCAPS 5.1.3 ESR Rollup U2, Design: WinXP, Runtime: Solaris)

Is there any web application server or J2EE server that has its own classloader to dynamically extract and hunt down things within a .ear or .war file without unarchiving it to disk? Everyone I have ever seen creates a temporary file on disk.
A fellow colleague of mine indicated that with today's computer speeds it is actually faster in most cases to read and decompress on the fly in memory than to decompress to disk and then read from disk. The reason is that one of the biggest bottlenecks is IO. Reading an uncompressed file from disk may be slower than reading a compressed file from disk: the IO is slow, but decompressing a highly compressed file is faster than reading the whole uncompressed file from the HD. This may not always be the case, considering OS caching of recently used files, etc.
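The colleague's point — fewer bytes off the disk can beat skipping decompression — is easy to check in outline. A rough zlib sketch with a made-up payload (real results depend on the disk and the data):

```python
import zlib

# A compressible payload standing in for a deployment descriptor in a .war.
data = b"some fairly repetitive deployment descriptor text\n" * 20000

compressed = zlib.compress(data, 6)
print(len(data), "bytes if stored uncompressed")
print(len(compressed), "bytes to read from disk if stored compressed")

# The CPU pays to inflate the smaller read back to the original:
assert zlib.decompress(compressed) == data
print(f"compression ratio: {len(compressed) / len(data):.3f}")
```

For highly repetitive text the on-disk read shrinks by orders of magnitude, which is why inflating in memory can win despite the CPU cost.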
How to get how many transactions were performed in the system
Hi All,
Could anyone tell me whether there is a way to get from the system how many transactions (hiring, leave of absence, terminations, salary changes, etc.) were performed?
Regards
SiriHi!
SE16 through table can be view
PA0000> Execute .
Regards
Sheetal -
Decreasing performance when executing consecutive XQuery Update operations
hello,
I am writing an application in Java using Berkeley DB XML (2.5.16). I have to run a test of the application in which a lot of XQuery Update operations are executed. All update operations are insert statements. In the beginning the test works fine. However, as the number of updates increases (over 1000 XQuery Updates), there is also a great increase in execution time. How can I improve the performance of updates?
Note that:
1) The environment configuration is the following:
EnvironmentConfig envConfig = new EnvironmentConfig();
envConfig.setAllowCreate(true);
envConfig.setInitializeCache(true);
envConfig.setCacheSize(512*1024*1024);
envConfig.setInitializeLocking(false);
envConfig.setInitializeLogging(false);
envConfig.setTransactional(false);
2) No transactions are used
3) The container is a node container
4) Auto-indexing is on (I need that)
5) The container includes only one XML document of 2.5 MB.
6) In some insert statements 1000 nodes are to be inserted.
7) I am running windows vista on a 4GB Ram pc.
Thank you in advance,
theo

Hi Lucas,
Thanks a lot for your reply! However, I have a few more questions.
Given that the XML is about 2.5 MB, is it considered so big for Berkeley DB that it needs to be broken down into smaller documents?
Also, after executing ~4000 XQuery Update insert operations and doubling the XML's size, the performance is really getting worse... An insertion may even take 1.5 min, when each of the first 3000 insertions needed less than 0.5 sec... Is there something I am missing in the configuration of Berkeley DB? If I set auto-indexing off and try to maintain fewer indexes, is it possible to see significantly better performance? So far I just get the following error when I set my indexes and try to execute consecutive insertions on the same node:
Exception in thread "main" com.sleepycat.dbxml.XmlException: Error: Sequence does not match type item() - the sequence does not contain items [err:XUDY0027], <query>:1:1, errcode = QUERY_EVALUATION_ERROR
at com.sleepycat.dbxml.dbxml_javaJNI.XmlQueryExpression_execute__SWIG_1(Native Method)
at com.sleepycat.dbxml.XmlQueryExpression.execute(XmlQueryExpression.java:85)
Thanks,
theo -
Slow MBP keyboard & Apple Extended Keyboard Performance
I have noticed that the above keyboards run slowly, meaning there is increased latency from keystroke until the character appears, when I use IE 8.0.6001.18702 in Windows XP SP3 running as a VM in Parallels 7.0.15107. My base OS is Mac OS X 10.7.5. I know this is a rather unique situation; however, the keyboards have only recently begun to misbehave. Any thoughts?
Sorry, but what does this have to do with the iPod touch?
If you're attempting to use an Apple Extended Keyboard with a Windows 7 system and have a problem, you should take this up on a Windows-oriented support site. The Apple Extended Keyboard itself has no programmable keys, so any key remapping would need to be done in Windows, and that's not a subject that should be addressed here (particularly not in the iPod touch forum).
Regards. -
DECREASED performance on K8T Neo after sitting at Windows Log In screen
I just built my Athlon64 system a couple of weeks ago, and I have been very happy with it so far. You can see the specs below. I am having a VERY strange sort of problem though. When I boot the computer up, everything boots fine, and fast with no problems, but when I get to the Windows Log In screen, and DO NOT log in within about 5 to 10 minutes from the time the log in screen appears, the system performance absolutely drops to a crawl, and logging in takes FOREVER. Applications and services take about 10 times longer to load, if at all. Most of the time I can't even get it to finish the initial startup, and have to hit RESET to get it to respond. I have never seen anything like this.
I am running Cool 'n Quiet (BIOS enabled, AMD patch in place) which seems to work when I am within Windows. I will say that if I log in before this problem starts, the system is rock solid, and the performance is great. No other issues of any kind - just this one, which is quite annoying. It even happens if I log off of Windows, and don't log back in within a few minutes.
I am on a domain, and the domain controller is operating normally. All other computers on the network respond normally and have no "log in performance" anxiety problems like my Athlon64 system.
Has anyone else seen such a thing? I am really confused. I guess I am going to try disabling Cool 'n Quiet and see if that helps, but I sure hate to do that, as I really like that feature.

Quote
Originally posted by R-Rbit
Check:
1. Did you load the correct Cool 'n' Quiet driver? There are separate drivers for WinXP, ME and W98, 2003.
2. Did you change the Power Scheme to 'Minimal Power Management' in Windows?
I have loaded the driver that AMD specified for use with Windows XP from the amd.com website, and I changed the power settings to minimal. Cool 'n' Quiet does work fine when I am logged in, but like I said, it somehow slows the computer WAY down when I let it sit at the log-in screen. Since I disabled Cool 'n Quiet in the BIOS, I haven't had any problems.
Thanks for the advice though. I appreciate the help. -
Large Images + MPE Hardware = Decreased Performance?
I have been enjoying the performance gain of using the GPU on my graphics card (GTS 250) even though it is not officially supported. I have found that some timelines that take the CPU to 95% in software mode play back using only 25% using the GPU.
I have found only one anomaly so far, and I am not sure if it is specific to the Mercury Playback Engine using hardware, to using an unsupported card, or just to my card/system. If I place large (over 4,320 pixels along the long edge) JPEG pictures on the timeline (DSLR 1080p @ 29.97), animate the position and scale using the standard motion effect, and apply a cross dissolve, the MPE using hardware will crash during export or bring it to a very slow grind. It is the only case I have found where exporting in software mode is much faster.
However, if I reduce all the images so that the longest edge is below 4,000 pixels, the speed of the MPE using hardware returns and exports work fine. I am curious to hear whether others have noticed any performance lag or problems from using large images.
PPRO CS5 5.0
Nvidia GTS 250
Nvidia Driver Version 197.?

Found it... it was on Karle Soule's website. Here is what he says about CS5 and maximum resolutions.
" In the Adobe Mercury Engine, the maximum timeline resolution is 10240 pixels by 8192 pixels, more than enough for any mastering resolution we'll see in the near future. The maximum resolution for clips dropped in any timeline will be limited to 256 megapixels in any direction. So, for example, footage from a 32000 by 8000 pixel sensor could be imported and dropped onto a timeline."
From the bottom of this page: http://blogs.adobe.com/VideoRoad/mercury/
I am sure that Photoshop does a better job scaling images, but for projects with a lot of images it just does not seem very practical to crop/scale each image based on how much panning will be done. My project is not slowing down due to larger images and playback on the timeline is great; I was just wondering why the MPE using hardware bogs down.
By today's standards, an image of 4,300 pixels along the long edge is not that big...
I have found that the problem is related to the cross dissolve. If I remove the cross dissolve, the sequence will export with all the GPU goodness speed. It makes me wonder if both images have to be loaded by the GPU to calculate the dissolve. If that is the case, then two images over 4,096 would put my card over the 8,192 maximum image texture... however, the GTX 285 has the same maximum image texture, so I wonder whether the problem occurs only on my card/system or in all GPU rendering.
Decreased performance with every 'upgrade'
I use Reader in a corporate environment and of course, at home.
This morning here at work I was prompted to install the latest and greatest upgrade to what I presume is the free version, Reader X.
It suggested rebooting after installation and I complied.
Now I open PDFs that two years ago would open in a heartbeat, and I could literally fly through them to find the information I need. Now they open horribly slowly, and browsing through them is like watching spaghetti sauce run down a wall... at least for the first couple of minutes, after which it appears all the indexing is done (for the 200th time on that document) and it finally starts to work.
This is a corporate environment and this increasingly poor performance is costing corporations a lot of money. If I, as one user, spend 10 extra minutes per day due to this... can you imagine the overall cost to corporations globally? This is nuts, Adobe. You are eroding the profit margins of the very corporations that facilitate your existence. Please go back to the drawing board. This isn't new. Each and every 'upgrade' since you included the indexing feature (or made it obvious) has resulted in slower loads.

I've noticed with the new Core Center that the fan speed will randomly rise (to an audible level) even when CnQ is keeping the CPU at 800 MHz. The only way to slow the fan down again is to close Core Center and start it again. The settings I use haven't changed; the only cause could be the new Core Center software.
I haven't updated to the 1.4 BIOS, but I'm about to do that, so if things change I'll post back here.
Do dual monitors decrease performance?
I was wondering if there is any proof that a dual-monitor card like a GTX 560 Ti, when run in a dual-monitor setup, drops FPS. Is this a true statement?
Or does it not alter performance at all?
David

This setup has not caused me any problems, but I look forward to using a GTX 680 or similar card to bypass my DV deck and use an HDMI connection to my TV.
PS. In case you are wondering, yeah, I'm a lefty.