Serial IO performance advantages
Hi all-
I'm working on a database project right now (http://jdbm.sourceforge.net/) that needs to have the absolute fastest mechanism for streaming data onto a file (this is for use in a database log file).
The log file is going to be implemented as a write-only ring buffer (i.e., we will write byte 0 through MAX_FILE_SIZE, then start over again at byte 0).
In most native file systems, one can specify that a set of IO operations is going to be purely sequential writing. This allows the OS to disable some types of read-ahead buffering, optimize paging operations for write-only access, etc., and provides the absolute fastest write times.
To my knowledge, Java doesn't really provide the developer with an option to say that file access will be purely sequential and write-only - but I'm wondering if maybe there are things we can do to implicitly nudge the JVM into making the correct low-level system calls?
I suspect that using a standard FileOutputStream would probably be a sufficient indicator - but we need to overwrite existing data without completely truncating the file (FOS will either kill the file and start fresh or append to the existing file, neither of which is suitable for a ring buffer).
Another possibility is the FileChannel.write(ByteBuffer) method. Maybe the combination of using a FileOutputStream, then getting the channel and periodically resetting the position to 0 would force things to be optimal for write only ring-buffering?
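To make the position-resetting idea concrete, here is a minimal sketch: a RandomAccessFile opened in "rw" mode (which, unlike FileOutputStream, neither truncates nor forces append) exposes a FileChannel that wraps back to offset 0 at the size limit. The class and method names here are just for illustration, and whether this pattern actually triggers sequential-write optimizations in the underlying OS is exactly the open question:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

/** Minimal write-only ring-buffer log: wraps to offset 0 once maxSize is reached. */
public class RingLog implements AutoCloseable {
    private final FileChannel channel;
    private final long maxSize;

    public RingLog(String path, long maxSize) throws IOException {
        // "rw" opens the file without truncating it, so existing log bytes
        // are overwritten in place rather than appended to.
        this.channel = new RandomAccessFile(path, "rw").getChannel();
        this.maxSize = maxSize;
    }

    /** Writes the buffer at the current position, wrapping to byte 0 at the limit. */
    public void append(ByteBuffer data) throws IOException {
        while (data.hasRemaining()) {
            long pos = channel.position();
            if (pos >= maxSize) {
                channel.position(0);    // wrap: start overwriting from byte 0
                pos = 0;
            }
            // Write at most up to the end of the ring before wrapping again.
            int room = (int) Math.min(data.remaining(), maxSize - pos);
            ByteBuffer slice = data.duplicate();
            slice.limit(slice.position() + room);
            int n = channel.write(slice);
            data.position(data.position() + n);
        }
    }

    @Override public void close() throws IOException { channel.close(); }
}
```

A variant worth benchmarking is opening the RandomAccessFile in "rwd" mode, which forces content writes through to the device and may better match a database log's durability needs.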
Does anyone have any experience with the performance characteristics of Java's IO classes in this kind of situation?
Thanks in advance,
- K
That's a dangerous path because what works for a
certain JVM at a specific point in time may not work
in another context.
I definitely don't see it that way. If I can make our application perform better under even some JREs, then it's worth doing, even if performance is merely 'normal' under other JREs. There are huge chunks of NIO that are not guaranteed to provide any performance benefit unless the JRE has been optimized for them, but they're still worth using...
Regardless, the question is not whether this is something to do, but rather if anyone has any experience with it. The lack of responses makes me think the answer is "no"...
- K
Similar Messages
-
Performance advantage to standalone OC4J versus App Server?
If one uses only the JMS server part of the 10g Application Server, would there be any performance advantage to running the standalone version of the OC4J JMS server versus running the entire application server?
Are you saying there is zero overhead associated with passing JMS messages through the application server?
Logic suggests a performance increase when a step is removed... -
Hi gurus,
with a customer we're working about a new implementation project.
The customer runs some of its supply-chain processes on SAP and wants to retire a custom system that is used to track the life of certain products.
The scope of the SAP project is to implement serial number management so we can see the full life cycle of the product, from the purchase of the raw smart card to the sale of the finished card, with details of its manufacturing history.
Before we implement the solution, we have to be sure that the SAP system can manage a high volume of serial numbers, without table-space problems and without performance problems in purchasing, stock management, and sales and distribution processes, whether in user transactions, reporting, program runs, or mass changes of serial numbers.
We estimate that we will have to handle about 11 million serial numbers per year.
We've looked in OSS for finding a note regarding this topic, but we've found nothing about a similar problem.
Can someone please give a feedback about this issue?
Thanks in advance for your cooperation.
Best regards,
Marco
Hi Marco,
Please refer to the link below; it may help you:
http://help.sap.com/saphelp_47x200/helpdata/en/dd/560d93545a11d1a7020000e829fd11/frameset.htm -
Serial Number Performance and DB Locking
Hi All,
Can anyone assist with an issue we are having with our SBO2007 PL 42...
There are 8 million Serial Numbers in the system with an average of 1 million Serial Numbers per Item (8 Items). - Table OSRI
There are 40 million Transactions in the system with an average of 5 Transactions per Serial Number.
When the Serial Number Selection Screen opens in a Marketing Document, it slows the entire organisation down and it seems like there are Locks in the Database that are causing this.
Can anyone help me understand why this occurs and if there is any way of overcoming this "Serious" Issue?
Thanks.
Hi,
Please close your duplicated thread.
Thanks,
Gordon -
Performance Advantages of Insert All
Hi,
Can you please provide a list of advantages of using INSERT ALL vs. INSERT.
regards,
Dipankar
Can you please provide a list of advantages of using INSERT ALL vs. INSERT.
You can probably find out more from the online documentation or a good web search, but advantages (probably) include
* improved efficiency by avoiding context switching
* one statement instead of multiple statements -
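To make the one-statement-vs-many advantage concrete, here is a hedged sketch (the table and column names are made up) of assembling a single multi-row Oracle INSERT ALL statement, which an application would then prepare and execute as one round trip instead of N separate INSERTs:

```java
import java.util.List;

/** Builds an unconditional multi-row INSERT ALL statement for Oracle. */
public final class InsertAllBuilder {

    /**
     * Produces one statement of the form:
     *   INSERT ALL INTO t (c1, c2) VALUES (?, ?) ... SELECT * FROM dual
     * with rowCount repeated INTO clauses, for use with a PreparedStatement.
     */
    public static String build(String table, List<String> columns, int rowCount) {
        String cols = String.join(", ", columns);
        String marks = "?, ".repeat(columns.size() - 1) + "?";
        StringBuilder sql = new StringBuilder("INSERT ALL");
        for (int i = 0; i < rowCount; i++) {
            sql.append(" INTO ").append(table)
               .append(" (").append(cols).append(") VALUES (").append(marks).append(")");
        }
        // Unconditional multitable inserts require a subquery; DUAL supplies one row.
        return sql.append(" SELECT * FROM dual").toString();
    }
}
```

Note that whether this beats a JDBC batch of plain INSERTs depends on the driver and the database version, so it is worth measuring both.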
Hi Experts,
In the report shown below, database access is consuming most of the runtime. Can you please help me find where I can improve performance? This is urgent.
Report need performance
Program Name : ZSD_QUOTE *
Functional Analyst : TOBY *
Programmer : Vijay Joseph *
Start date : 03/14/2007 (MM/DD/YYYY) *
Initial CTS : DEVK913353 *
Description : This program will generate the Quote detls *
Includes : None *
Function Modules : None *
Logical database : None *
Transaction Code : ZQUOTE *
External references : None *
Modification Log *
Date | Modified by | CTS number | Comments *
03/14/2007|Vijay Joseph | DEVK913353 |Initial Development *
REPORT ZSD_QUOTE
line-size 252
line-count 40(0)
no standard page heading.
*Tables
TABLES : VBAK,
EQUI,
EKKO.
*TYPES
TYPES : BEGIN OF T_VBAP,
VBELN LIKE VBAK-VBELN,
ERDAT LIKE VBAK-ERDAT,
BNDDT LIKE VBAK-BNDDT,
NETWR LIKE VBAK-NETWR,
VKBUR LIKE VBAK-VKBUR,
BSTNK LIKE VBAK-BSTNK,
KUNNR LIKE VBAK-KUNNR,
POSNR LIKE VBAP-POSNR,
MATNR LIKE VBAP-MATNR,
PSTYV LIKE VBAP-PSTYV,
KWMENG LIKE VBAP-KWMENG,
VGBEL LIKE VBAP-VGBEL,
VGPOS LIKE VBAP-VGPOS,
WERKS LIKE VBAP-WERKS,
END OF T_VBAP.
*Types for the likp and lips
TYPES : BEGIN OF T_LIPS,
VBELN LIKE LIKP-VBELN,
LFDAT LIKE LIKP-LFDAT,
POSNR LIKE LIPS-POSNR,
PSTYV LIKE LIPS-PSTYV,
MATNR LIKE LIPS-MATNR,
WERKS LIKE LIPS-WERKS,
VGBEL LIKE LIPS-VGBEL,
VGPOS LIKE LIPS-VGPOS,
END OF T_LIPS.
*Types for the EQUI
TYPES : BEGIN OF T_EQUI,
EQUNR LIKE EQUI-EQUNR,
SERNR LIKE EQUI-SERNR,
KDAUF LIKE EQBS-KDAUF,
KDPOS LIKE EQBS-KDPOS,
END OF T_EQUI.
*Types for the KNA1
TYPES : BEGIN OF T_KNA1,
KUNNR LIKE KNA1-KUNNR,
NAME1 LIKE KNA1-NAME1,
END OF T_KNA1.
*Types for the MAKT
TYPES : BEGIN OF T_MAKT,
MATNR LIKE MAKT-MATNR,
MAKTX LIKE MAKT-MAKTX,
SPRAS LIKE MAKT-SPRAS,
END OF T_MAKT.
*types for VBFA
TYPES : BEGIN OF T_VBFA,
VBELV LIKE VBFA-VBELV,
POSNV LIKE VBFA-POSNV,
VBELN LIKE VBFA-VBELN,
POSNN LIKE VBFA-POSNN,
VBTYP_N LIKE VBFA-VBTYP_N,
END OF T_VBFA.
*types for the output
TYPES : BEGIN OF T_OUTPUT,
VBELV LIKE VBFA-VBELV,
ERDAT LIKE VBAK-ERDAT,
BNDDT LIKE VBAK-BNDDT,
NETWR(15) type C, " LIKE VBAK-NETWR,
VBELN LIKE VBAK-VBELN,
BSTNK LIKE VBAK-BSTNK,
KUNNR LIKE VBAK-KUNNR,
KWMENG(15) TYPE C, " LIKE VBAP-KWMENG,
NAME1 LIKE KNA1-NAME1,
VKBUR LIKE VBAK-VKBUR,
MATNR LIKE MAKT-MATNR,
MAKTX LIKE MAKT-MAKTX,
LFDAT LIKE LIKP-LFDAT,
SERNR LIKE EQUI-SERNR,
END OF T_OUTPUT.
*Types for the VBUP
TYPES : BEGIN OF T_VBUP,
vbeln LIKE VBUP-VBELN,
posnr LIKE VBUP-POSNR,
lfsta LIKE VBUP-LFSTA,
END OF T_VBUP.
*Internal Table
DATA : GIT_VBAP TYPE STANDARD TABLE OF T_VBAP,
GIT_LIPS TYPE STANDARD TABLE OF T_LIPS,
GIT_EQUI TYPE STANDARD TABLE OF T_EQUI,
GIT_KNA1 TYPE STANDARD TABLE OF T_KNA1,
GIT_MAKT TYPE STANDARD TABLE OF T_MAKT,
GIT_OUTPUT TYPE STANDARD TABLE OF T_OUTPUT,
GIT_VBUP TYPE STANDARD TABLE OF T_VBUP,
GIT_VBFA TYPE STANDARD TABLE OF T_VBFA.
*work Area
DATA : GWA_VBAP TYPE T_VBAP,
GWA_LIPS TYPE T_LIPS,
GWA_EQUI TYPE T_EQUI,
GWA_KNA1 TYPE T_KNA1,
GWA_MAKT TYPE T_MAKT,
GWA_OUTPUT TYPE T_OUTPUT,
GWA_VBUP TYPE T_VBUP,
GWA_VBFA TYPE T_VBFA.
*selection screen.
SELECTION-SCREEN : BEGIN OF BLOCK ZBLOCK WITH FRAME TITLE TEXT-015.
Select-options : S_VBELN FOR VBAK-VBELN,
S_ERDAT FOR VBAK-ERDAT, " OBLIGATORY,
S_EBELN FOR EKKO-EBELN MATCHCODE OBJECT MEKK,
S_SERNR FOR EQUI-SERNR MATCHCODE OBJECT EQSN.
PARAMETERS : P_WERKS LIKE VBAP-WERKS OBLIGATORY.
SELECTION-SCREEN : END OF BLOCK ZBLOCK.
**************top of page*********************************************
TOP-OF-PAGE.
PERFORM SAPSD_TOP_OF_PAGE.
**************At selection screen*************************************
at selection-screen.
*for validating the Sales Order
PERFORM SAPSD_SCREEN_VALIDATION_VBELN.
*for validating the plant
PERFORM SAPSD_SCREEN_VALIDATION_WERKS.
*for the validating the PO number
PERFORM SAPSD_SCREEN_VALIDATION_PO.
*for the validating the serial number
PERFORM SAPSD_SCREEN_VALIDATION_SERIAL.
***************strart of selection************************************
START-OF-SELECTION.
*Get the data
PERFORM SAPSD_FETCH_DATA.
*For the final output table
PERFORM SAPSD_OUTPUT.
*& Form SAPSD_FETCH_DATA
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_FETCH_DATA .
*FETCH FROM THE VBAK AND VBAP.
SELECT VBAK~VBELN
VBAK~ERDAT
VBAK~BNDDT
VBAK~NETWR
VBAK~VKBUR
VBAK~BSTNK
VBAK~KUNNR
VBAP~POSNR
VBAP~MATNR
VBAP~PSTYV
VBAP~KWMENG
VBAP~VGBEL
VBAP~VGPOS
VBAP~WERKS
FROM VBAK INNER JOIN VBAP
ON VBAK~VBELN EQ VBAP~VBELN
INTO TABLE GIT_VBAP
WHERE VBAK~VBELN IN S_VBELN
AND VBAK~ERDAT IN S_ERDAT
AND VBAK~BSTNK IN S_EBELN
AND VBAP~PSTYV EQ 'IRRA'
AND VBAP~WERKS EQ P_WERKS.
IF SY-SUBRC EQ 0.
SORT GIT_VBAP BY VBELN.
else.
message e022(z1).
ENDIF.
*from vbfa
select VBELV
POSNV
VBELN
POSNN
VBTYP_N
into table git_vbfa
from vbfa
for all entries in git_vbap
where vbelv eq git_vbap-vbeln
and posnv eq git_vbap-posnr.
*FETCH DATA FROM THE LIKP AND LIPS
IF NOT GIT_VBAP IS INITIAL.
SELECT LIKP~VBELN
LIKP~LFDAT
LIPS~POSNR
LIPS~PSTYV
LIPS~MATNR
LIPS~WERKS
LIPS~VGBEL
LIPS~VGPOS
FROM LIKP INNER JOIN LIPS
ON LIKP~VBELN EQ LIPS~VBELN
INTO TABLE GIT_LIPS
FOR ALL ENTRIES IN GIT_VBFA
WHERE LIPS~VBELN EQ GIT_VBFA-VBELN
and LIPS~POSNR EQ GIT_VBFA-POSNN.
* AND LIPS~WERKS EQ GIT_VBAP-WERKS.
* AND LIPS~MATNR EQ GIT_VBAP-MATNR.
* AND LIPS~POSNR EQ GIT_VBAP-POSNR.
* AND LIPS~PSTYV EQ 'IRRA'.
* AND LIPS~VGPOS EQ GIT_VBAP-POSNR.
IF SY-SUBRC EQ 0.
SORT GIT_LIPS BY VBELN.
ENDIF.
ENDIF.
*for getting the delivery status (don't take delivered document numbers,
*take only 'open' ones).
if not git_lips is initial.
select VBELN
posnr
lfsta
from vbup
into table git_vbup
for all entries in git_lips
where vbeln eq git_lips-vbeln
and posnr eq git_lips-posnr
and ( lfsta eq 'A'
or lfsta eq 'B' ).
if sy-subrc eq 0.
sort git_vbup by vbeln.
endif.
endif.
*To get the equipment number
IF NOT GIT_VBAP IS INITIAL.
SELECT EQUI~EQUNR
EQUI~SERNR
EQBS~KDAUF
EQBS~KDPOS
FROM EQUI INNER JOIN EQBS
ON EQUI~EQUNR EQ EQBS~EQUNR
INTO TABLE GIT_EQUI
FOR ALL ENTRIES IN GIT_VBAP
WHERE EQUI~SERNR IN S_SERNR
AND EQBS~KDAUF EQ GIT_VBAP-VBELN.
IF SY-SUBRC EQ 0.
SORT GIT_EQUI BY EQUNR.
ENDIF.
ENDIF.
*To get the customer name
IF NOT GIT_VBAP IS INITIAL.
SELECT KUNNR
NAME1
INTO TABLE GIT_KNA1
FROM KNA1
FOR ALL ENTRIES IN GIT_VBAP
WHERE KUNNR EQ GIT_VBAP-KUNNR.
IF SY-SUBRC EQ 0.
SORT GIT_KNA1 BY KUNNR.
ENDIF.
ENDIF.
*to get the material number
if not git_vbap is initial.
SELECT MATNR
MAKTX
SPRAS
INTO TABLE GIT_MAKT
FROM MAKT
FOR ALL ENTRIES IN GIT_VBAP
WHERE MATNR EQ GIT_VBAP-MATNR
AND SPRAS EQ SY-LANGU.
IF SY-SUBRC EQ 0.
SORT GIT_MAKT BY MATNR.
ENDIF.
endif.
ENDFORM. " SAPSD_FETCH_DATA
*& Form SAPSD_OUTPUT
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_OUTPUT .
data : l_vbelv like vbfa-vbelv.
LOOP AT GIT_VBAP INTO GWA_VBAP.
*for getting the delivery date
clear : gwa_lips.
read table git_vbfa into gwa_vbfa with key vbelv = gwa_vbap-vbeln
posnv = gwa_vbap-posnr.
if sy-subrc eq 0.
read table git_lips into gwa_lips
with key VBELN = GWA_vbfa-Vbeln
POSNR = GWA_vbfa-posnn
PSTYV = 'IRRA'.
IF SY-SUBRC EQ 0.
GWA_OUTPUT-LFDAT = GWA_LIPS-LFDAT.
READ TABLE GIT_VBUP INTO GWA_VBUP
WITH KEY VBELN = GWA_LIPS-VBELN
POSNR = GWA_LIPS-POSNR.
IF SY-SUBRC EQ 0.
IF GWA_VBUP-LFSTA EQ 'A' OR GWA_VBUP-LFSTA EQ 'B'.
clear : l_vbelv.
select single vbelv
into l_vbelv
from vbfa
where VBELN EQ gwa_vbap-vbeln.
*Quote Number
if sy-subrc eq 0.
GWA_OUTPUT-VBELV = L_VBELV.
endif.
*Move the details to the final table
GWA_OUTPUT-VBELN = GWA_VBAP-VBELN.
GWA_OUTPUT-ERDAT = GWA_VBAP-ERDAT.
GWA_OUTPUT-BNDDT = GWA_VBAP-BNDDT.
GWA_OUTPUT-NETWR = GWA_VBAP-NETWR.
GWA_OUTPUT-KUNNR = GWA_VBAP-KUNNR.
GWA_OUTPUT-KWMENG = GWA_VBAP-KWMENG.
GWA_OUTPUT-BSTNK = GWA_VBAP-BSTNK.
*for getting the name from kna1
CLEAR : GWA_KNA1.
READ TABLE GIT_KNA1 INTO GWA_KNA1
WITH KEY KUNNR = GWA_VBAP-KUNNR.
IF SY-SUBRC EQ 0.
GWA_OUTPUT-NAME1 = GWA_KNA1-NAME1.
ENDIF.
GWA_OUTPUT-VKBUR = GWA_VBAP-VKBUR.
*for getting material number and description
CLEAR : GWA_MAKT.
READ TABLE GIT_MAKT INTO GWA_MAKT
WITH KEY MATNR = GWA_VBAP-MATNR
SPRAS = SY-LANGU.
IF SY-SUBRC EQ 0.
GWA_OUTPUT-MATNR = GWA_MAKT-MATNR.
GWA_OUTPUT-MAKTX = GWA_MAKT-MAKTX.
ENDIF.
*for getting the serial number
clear : gwa_equi.
read table git_equi into gwa_equi
with key kdauf = gwa_vbap-vbeln
kdpos = gwa_vbap-posnr.
IF SY-SUBRC EQ 0.
GWA_OUTPUT-SERNR = gwa_equi-sernr.
ENDIF.
append gwa_output to git_output.
ENDIF.
ENDIF.
ENDIF.
CLEAR : GWA_VBAP,
GWA_OUTPUT.
ENDLOOP.
*free and refresh the internal tables
clear : git_vbap,
git_lips,
git_makt,
git_equi.
refresh : git_vbap,
git_lips,
git_makt,
git_equi.
free: git_vbap,
git_lips,
git_makt,
git_equi.
loop at git_output into gwa_output.
FORMAT COLOR COL_NORMAL INTENSIFIED OFF INVERSE OFF.
WRITE : /1 sy-vline,
2 gwa_output-VBELV, "qte no
13 sy-vline,
14 gwa_output-ERDAT, "cr date
25 sy-vline,
26 gwa_output-BNDDT, "exp date
36 sy-vline,
37 gwa_output-NETWR, "qte value
53 sy-vline,
54 gwa_output-VBELN, "so
65 SY-VLINE,
66 gwa_output-BSTNK, "po
87 SY-VLINE,
88 gwa_output-KUNNR, "customer
99 SY-VLINE,
100 gwa_output-NAME1, "Name
136 sy-vline,
137 gwa_output-VKBUR, "S off
142 sy-vline,
143 gwa_output-MATNR, "Material
162 sy-vline,
163 gwa_output-MAKTX , "Description
204 sy-vline,
205 gwa_output-KWMENG, "Or Qty
221 sy-vline,
222 gwa_output-LFDAT, "Del Date
233 sy-vline,
234 gwa_output-SERNR, "Serial No
252 SY-VLINE.
uline.
clear : gwa_output.
endloop.
*free and refresh the internal table
refresh : git_output.
free : git_output.
ENDFORM. " SAPSD_OUTPUT
*& Form SAPSD_TOP_OF_PAGE
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_TOP_OF_PAGE .
write: /15 text-016, 30 sy-repid.
FORMAT COLOR COL_HEADING INTENSIFIED ON INVERSE OFF.
ULINE.
WRITE : /1 sy-vline,
2 text-001, "QTE No
13 sy-vline,
14 text-002, "CR Date
25 sy-vline,
26 text-003, "EX Date
36 sy-vline,
37 text-004, "QT Value
53 sy-vline,
54 text-005, "SO
65 SY-VLINE,
66 text-006, "PO
87 SY-VLINE,
88 text-007, "Customer
99 sy-vline,
100 text-008, "Name
136 sy-vline,
137 text-009, "S off
142 sy-vline,
143 text-010, "Material
162 sy-vline,
163 text-011 , "Description
204 sy-vline,
205 text-012, "Or Qty
221 sy-vline,
222 text-013, "Del Date
233 sy-vline,
234 text-014, "Serial No
252 SY-VLINE.
ULINE.
ENDFORM. " SAPSD_TOP_OF_PAGE
*& Form SAPSD_SCREEN_VALIDATION_VBELN
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_SCREEN_VALIDATION_VBELN .
IF NOT S_VBELN IS INITIAL.
*To check the SO. If the entry is wrong, an error message is displayed.
DATA : l_VBELN LIKE VBAK-VBELN. "SO
*Validating SO in selection screen
SELECT SINGLE VBELN INTO l_VBELN FROM VBAK
WHERE VBELN IN S_VBELN.
IF sy-subrc NE 0.
MESSAGE e023(Z1). " Invalid SO
ENDIF.
endif.
ENDFORM. " SAPSD_SCREEN_VALIDATION_VBELN
*& Form SAPSD_SCREEN_VALIDATION_WERKS
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_SCREEN_VALIDATION_WERKS .
IF NOT P_WERKS IS INITIAL.
*To check the plant.
*If the entry is wrong, an error message is displayed.
DATA : l_WERKS LIKE T001W-WERKS. "Plant
*Validating Plant in selection screen
SELECT SINGLE WERKS INTO l_WERKS FROM T001W
WHERE WERKS EQ P_WERKS.
IF sy-subrc NE 0.
MESSAGE e024(Z1). " Invalid Plant
ENDIF.
ENDIF.
ENDFORM. " SAPSD_SCREEN_VALIDATION_WERKS
*& Form SAPSD_SCREEN_VALIDATION_PO
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_SCREEN_VALIDATION_PO .
IF NOT S_EBELN IS INITIAL.
*To check the PO.
*If the entry is wrong, an error message is displayed.
DATA : l_EBELN LIKE EKKO-EBELN. "PO
*Validating PO in selection screen
SELECT SINGLE EBELN INTO l_EBELN FROM EKKO
WHERE EBELN IN S_EBELN.
IF sy-subrc NE 0.
MESSAGE e025(Z1). " Invalid PO
ENDIF.
ENDIF.
ENDFORM. " SAPSD_SCREEN_VALIDATION_PO
*& Form SAPSD_SCREEN_VALIDATION_SERIAL
* text
* --> p1 text
* <-- p2 text
FORM SAPSD_SCREEN_VALIDATION_SERIAL .
IF NOT S_SERNR IS INITIAL.
*To check the SERIAL NO.
*If the entry is wrong, an error message is displayed.
DATA : l_SERNR LIKE EQUI-SERNR. "Serial No
*Validating Serial NO in selection screen
SELECT SINGLE SERNR INTO l_SERNR FROM EQUI
WHERE SERNR IN S_SERNR.
IF sy-subrc NE 0.
MESSAGE e026(Z1). " Invalid Serial No
ENDIF.
ENDIF.
ENDFORM. " SAPSD_SCREEN_VALIDATION_SERIAL
Please help me with this.
Thanks & Regards
Ahammad
Hi Shaik,
Please remove all the join select queries and use the 'for all entries' variant of the select query. Check whether you can create and use indexes in your queries.
Thanks and Regards,
Saurabh Chhatre -
Front-end HTTP Server and Performance with .jspx pages?
This is more of a general question that I'm looking for validation:
If the majority of our website is implemented as .jspx pages, with very few straight HTML pages, is there benefit in deploying to an environment with a separate HTTP front-end web server and back-end Application server (java container)? For example, I'm deploying to Tomcat as both the HTTP server and Java Application server for the .jspx pages; is there a performance advantage in deploying to an Apache HTTP server with a connector to Tomcat if I'm primarily serving up .jspx pages? I'm not as familiar with Oracle AS architecture, so my question is primarily around Tomcat deployment.
thanks -
RAID-5 configuration and performance
Hi,
About three years ago, we had a disk drive fail, so after the long-delayed repair, I reconfigured the data disk drives into one RAID-5 metadevice:
$ cat /etc/lvm/md.cf
# metadevice configuration file
# do not hand edit
d45 -r c1t1d0s0 c1t2d0s0 c1t3d0s0 c1t4d0s0 c1t5d0s0 c1t9d0s0 c1t10d0s0 -k -i 256b
(256b -> 256 blocks or 128k bytes)
I went RAID-5 because I wanted the next disk failure to keep the application up until we could schedule a repair activity . . . not fail the application hard and create a crisis du jour. But at the time, I was working fast and did not realize the '256b' was larger than the cache size of the existing disk drives; the smallest disk cache in the array is just over 105k bytes. Still, the array built fine and works in our application. It is 'fast enough' until I try to load a backup . . . then it performs like a dog. It is the Oracle database restore that takes too long.
I suspect my large stripe/interlace size has cut the number of hardware cache buffers in half, from 64 to 32 and this would explain the poor restore performance. I am looking at alternatives now that 1TB USB drives are in the petty cash range but I work on a government contract and money is tight, which leads to these questions:
UFS blocksize is 8192 - is there a known, optimum RAID-5 "interlace" size that is an integer multiple of the UFS block size?
Larger blocksizes means longer data transfer times to and from cache but compared to rotational delays, this is modest. Is there a credible, RAID-5 performance model for random read/writes to a RAID-5 array that uses rotational delay, data transfer speed, and seek delays for an optimum solution?
Are there Solaris tools that might give us insights to disk-layer, SCSI commands being used? Something for SCSI like 'snoop' is for network traffic.
The disk drives appear to have an option to use the cache for read-modify-write instead of physically hitting the same track over and over again. Are there Solaris tools that would allow us to use basic level SCSI commands to reconfigure the disk drives for more intense cache operation during the restore and return to default performance during normal operation?
My current thinking is to take the system down for a series of benchmarks using a semi-log scale "stripe/interlace" of "n" multiples of 8192, the UFS blocksize:
1*8192 -> the smallest tested only if n=2-5 suggests n=2 might be faster than any others
2*8192 -> second test, matches some examples in the documentation
5*8192 -> first test (I have some smaller backups to use that should run within a day)
10*8192 -> third test, determines if there is any reason to go larger
13*8192 -> is the largest that would fit in the smallest disk drive, cache
Based upon these benchmarks, I would test the range between the two fastest to see if there is a multiple of 8192 that shows a performance advantage. However, these tests take time, measured in days. I'm OK with taking the down time but wanted to ask the community if anyone has a better plan or insights to share.
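As a quick sanity check on the plan above, a few lines of code can enumerate every candidate interlace that is a multiple of the 8192-byte UFS blocksize and still fits in the smallest drive cache (assuming "just over 105k bytes" means roughly 105 * 1024 bytes); under that assumption, n = 13 is indeed the largest that fits:

```java
import java.util.ArrayList;
import java.util.List;

public class InterlaceCandidates {
    static final int UFS_BLOCK = 8192;   // UFS blocksize in bytes

    /** All interlace sizes n*UFS_BLOCK (n >= 1) that fit within cacheBytes. */
    public static List<Integer> candidates(int cacheBytes) {
        List<Integer> sizes = new ArrayList<>();
        for (int n = 1; n * UFS_BLOCK <= cacheBytes; n++) {
            sizes.add(n * UFS_BLOCK);
        }
        return sizes;
    }
}
```

This only bounds the search space; the benchmark runs themselves are still needed to find the fastest multiple.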
This is not a 'hair on fire' problem as I will schedule the RAID-5 benchmarks for late May. But I thought I'd ask the community first.
Thanks,
Bob Wilson
ps. This forum software seems to be at war with our government e-mail system. I will check back but you can also send a note to [email protected].
Solved, it turned out to be a setting in BIOS that I overlooked, drive is now in use. -
What are the advantages of using ENC for a datasource
Other than eliminating a tight binding between an EJB and a datasource, what are
the other considerations and/or advantages of using the JNDI ENC to define a datasource
reference vs. a direct lookup of a DataSource in the EJB implementation?
For example, are there performance advantages from the datasource being cached in
the JNDI ENC?
Hello Bryan,
Basically the main advantage is to not have the tight coupling between the EJB
and the particular JNDI name that it's bounded to. Please refer to a previous
post where I addressed this issue (perhaps a few days ago on this newsgroup).
In addition, you will always gain performance improvements by caching the remote/local
home objects for your EJBs.
Best regards,
Ryan LeCompte
[email protected]
http://www.louisiana.edu/~rml7669
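The caching advice above can be sketched generically. The class below (with hypothetical names) memoizes the result of an expensive lookup, such as a JNDI InitialContext.lookup for a DataSource or a remote/local EJB home, so the naming service is hit only once per name:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

/** Caches the result of an expensive name lookup (e.g. a JNDI lookup). */
public class LookupCache<T> {
    private final Map<String, T> cache = new ConcurrentHashMap<>();
    private final Function<String, T> resolver;

    public LookupCache(Function<String, T> resolver) {
        // In an EJB, the resolver might be:
        //   name -> (DataSource) new InitialContext().lookup(name)
        this.resolver = resolver;
    }

    public T get(String name) {
        // computeIfAbsent performs the expensive lookup only once per name
        return cache.computeIfAbsent(name, resolver);
    }
}
```

Whether the container already caches ENC lookups for you is vendor-specific, which is why measuring before and after is worthwhile.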
-
Advantages of Upgrading Weblogic 8.1 sp5 to 10.3
Hi
I am very much a novice with WebLogic. We are planning to upgrade WebLogic from 8.1 SP5 to 10.3. We need to convince the clients on certain issues with respect to the upgrade, say performance: what are the performance advantages of upgrading WebLogic from 8.1 to 10.3?
Could anybody list some major performance advantages of moving from WebLogic 8.1 to 10.3?
As a reference for performance, you can look at the Capacity Planning Guides for both versions of the product and do your own comparison. Note however that hardware changes between these two points in time are quite extreme.
[WLP 8.1 Capacity Planning Guide|http://edocs.bea.com/wlp/docs81/capacityplanning/index.html]
[ WLP 10.3 Capacity Planning Guide|http://download.oracle.com/docs/cd/E13155_01/wlp/docs103/capacityplanning/index.html]
Note that there are many other reasons to upgrade, including vast improvements in functionality and the short time horizon for continued support of WLP 8.1. I don't know when support will end for that release, but it's coming in the near to mid term.
Brad -
Oracle 10g Enterprise vs Standard Edition performance
For a database hosted on a single machine, is there any performance advantage in using Oracle 10g Enterprise Edition over Standard Edition? For our application we do not require data warehousing or OLAP, just the ability to store and query a large amount of XML as CLOBs or using structured mapping.
I have looked at the online documentation but have not been able to find the answer. Any advice would be appreciated.
Probably not. Unless there is some EE feature you can leverage, there probably won't be a performance difference.
Justin -
Is it more efficient to use a dynamic VI or utilize the VI Call Configuration Dialog Box which apparently can perform the same function? I realize that there are restrictions on using the VI Call Configuration Dialog Box, however, if my scenario doesn't concern the restrictions, why would I want to go thru the trouble of creating a dynamic VI when I could simply click on the VI of interest and configure from a menu? Are there performance advantages? Thanks in advance!
Generally, I wouldn't recommend playing with the call setup dialog at all (for those who don't know it, you can get to it by right clicking a subVI in the BD). By default, VIs are configured to load with callers and that's the correct options for almost all static VIs. The Open VI Reference primitive has multiple advantages:
It allows you to select different VIs dynamically.
It allows you to spawn multiple copies of reentrant VIs.
It allows you to perform asynch runs (although I think that this is something that should actually be available through the call setup dialog).
It allows you to open references to VIs in other application instances.
In the rare cases where you do want the same functionality that the call setup dialog gives you, it doesn't hide it.
Try to take over the world! -
Level Based vs. Value Based hier: advantages, disadvantages and limitations
Could someone give an overview about the advantages, disadvantages and limitations when comparing OLAP Level to Value based hierarchies?
Thanks,
Marcio
OLAP can handle both types of hierarchies. There is no performance advantage to using one over the other (whether loading the cube or querying).
If you are "pushing" dimensional-security inside OLAP, in that case also it does not matter.
Cube-based MVs (with query-rewrite) can only be created if the hierarchies are level-based. Are you going to need that?
Generally there are other "non-olap" factors:
(#1). Which reporting tool will be used. Does it provide better reporting capabilities if the source is parent-child or level-based?
(#2). Are there any reporting requirements that will need "level" information when selecting data.
(#3). Are there any security requirements that will be handled in the "reporting layer" which will need level-based information.
Although in case of (#1) and (#2), you can "expose" an olap's parent-child hierarchy as "level-based" hierarchy to the reporting tool using internal GID (grouping-id) information.
OBIEE 11.1.1.5 (and future versions) works well with OLAP metadata, and with any type of hierarchy in olap.
For other reporting tools, you have to see what features are available.
In short, its the source system, the reporting tool and the reporting requirements that will dictate what type of hierarchy should be stored in OLAP. -
Performance: JDBC vs. Native Oracle
Which type of connection should one use? Are there any
performance advantages of one over the other? Or does anyone know
where I can get this type of information?
While off the top of my head I can't directly answer your question, I can tell you that SQL*Loader is not the fastest tool in the world. It's not really any faster than running a SQL script from SQL*Plus.
If your JDBC process is running locally, then you could look at using the OCI8 driver instead of the thin JDBC driver. You should certainly look at batch processing. I can't see your performance loss being significant. Depending on how much data is actually being processed, you might even see a performance gain once connection-creation overhead is accounted for.
Ted. -
Photoshop CS6 Performance Settings Recommendations?
I use Photoshop CS6 for Image editing.
I just purchased a new Imac 27 inch late 2012 Model with an i7 3.4Ghz Processor plus the following:
1TB Fusiion HDD
24Gb of System Ram
2GB Video Card.
This screenshot shows what I currently have as my settings in the Performance pane.
I'd like, if possible, to maximize performance given my system spec.
Are these settings I have now optimum?
I have my scratch disc currently set to be my MacHDD
I do have a FW800 Aux1 which is not being used that has 900GB of free space.
Should I use it also?
Any suggestions on performance would be appreciated.
First, without describing the kind of Photoshop work you do, no one is going to be able to do more than guess wildly at how to set up your system. There are some general guidelines, some of which C.P. has stated. Do you edit huge documents? Long editing sessions? Bunches of documents at once?
Photoshop does not use multiple scratch disks simultaneously as far as I can tell. It's more of a "this one ran out of space, let me fall back to the next entry in the list" kind of thing. So there's no real performance advantage to specifying two drives as far as I can see - just convenience if you have limited space.
When pushing up against limits, it's good to allow the OS to have full access to the system HDD (e.g., for its own swapping) while Photoshop has concurrent access to its own, separate scratch drive. This way the I/O activity of both doesn't interfere and cause thrashing (lots of seeking, greatly reduced throughput).
What's a "Fusiion HDD"? I ask because C.P.'s "don't use the system drive for scratch" advice, while right on point with a standard HDD, may not be as pertinent with a drive that has near-SSD performance via an SSD caching scheme. Since SSDs have near-zero latency (no seeking), and usually quite a lot more throughput capacity than HDDs, it's less important to keep the OS and Photoshop from interfering with one another.
With 24GB of RAM you're probably not having to do a terribly large amount of scratch file writes/reads anyway. Are you experiencing slowdowns?
-Noel