Save instances in a buffer
Hello everybody, I have to read 500k instances from a file and then put them into a buffer. My current approach is to concatenate all the instances into one string and then put that string into the buffer. Unfortunately this solution is very slow...
Could you give me a faster solution using another data structure?
Thanks
> I have to read from a file 500k instances from a file and the put them into a buffer
What is a "buffer"?
Why can't you add the text directly to the buffer? Appending 500K instances of text to a String is incredibly inefficient since a new String is created every time. You could use a StringBuffer or StringBuilder.
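To make the difference concrete, here is a minimal sketch of the StringBuilder approach (the record format and the buildAll name are just illustrative, not the OP's actual data):

```java
public class InstanceBuffer {
    // Concatenate n instance records into one String efficiently.
    static String buildAll(int n) {
        StringBuilder sb = new StringBuilder(n * 12); // pre-sizing avoids repeated array growth
        for (int i = 0; i < n; i++) {
            sb.append("instance-").append(i).append('\n');
        }
        return sb.toString(); // one final copy, instead of a new String per append
    }

    public static void main(String[] args) {
        String all = buildAll(500_000);
        System.out.println(all.length());
    }
}
```

With `String s = s + next;` each iteration copies everything built so far, giving quadratic work over 500k appends; StringBuilder appends into a growing array instead.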
Similar Messages
-
Mapping instance variables to buffer in external format
I was inspired by this OP http://forum.java.sun.com/thread.jspa?threadID=5211760 to develop my first annotation. Having no experience
at all with annotations, I would like to ask you if it is something useful or
an abuse of annotations. I have my code at work and it works, but now I am at home, so here is the idea.
class TestByteMap {
    @ByteMap.Offset(0) byte aByte;
    @ByteMap.Offset(1) int aInt;
    @ByteMap.Offset(5) short aShort;

    public static void main( String[] args ) {
        byte[] buffer = new byte[] {1, 2, 3, 4, 5, 6, 7};
        ByteMap bm = new ByteMap( new TestByteMap() );
        bm.fromBytes( buffer ); // Populate the annotated variables from byte array
        bm.toBytes( buffer );   // Dump annotated variables to byte array
    }
}
ByteMap is a class that has an inner annotation Offset. The toBytes()
and fromBytes() methods use reflection to move bytes between the
buffer and the annotated variables.
Thank you for any feedback!
Message was edited by:
baftos
If you need to read/write a class to a byte array, using annotations to mark which fields correspond to which offsets is somewhat better than tracking this by hand in comments. Instead of using reflection, you could mark the fields as before and use an annotation processor at compile time to generate the read/write methods appropriate for the class. In any case, it would be advisable to check for overlaps and other semantic errors in the offset annotations.
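As a rough sketch of the reflection-based idea: the annotation, class names, and big-endian byte order below are illustrative assumptions, not the OP's actual ByteMap API.

```java
import java.lang.annotation.*;
import java.lang.reflect.Field;

public class OffsetMapDemo {
    // Marks the byte offset at which a field lives in the external buffer.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.FIELD)
    @interface Offset { int value(); }

    static class Record {
        @Offset(0) byte aByte;
        @Offset(1) int anInt;
        @Offset(5) short aShort;
    }

    // Populate annotated fields from the buffer (big-endian assumed).
    static void fromBytes(Object target, byte[] buf) throws IllegalAccessException {
        for (Field f : target.getClass().getDeclaredFields()) {
            Offset off = f.getAnnotation(Offset.class);
            if (off == null) continue;          // only map annotated fields
            f.setAccessible(true);
            int o = off.value();
            Class<?> t = f.getType();
            if (t == byte.class) {
                f.setByte(target, buf[o]);
            } else if (t == short.class) {
                f.setShort(target, (short) (((buf[o] & 0xFF) << 8) | (buf[o + 1] & 0xFF)));
            } else if (t == int.class) {
                f.setInt(target, ((buf[o] & 0xFF) << 24) | ((buf[o + 1] & 0xFF) << 16)
                        | ((buf[o + 2] & 0xFF) << 8) | (buf[o + 3] & 0xFF));
            }
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] buffer = {1, 2, 3, 4, 5, 6, 7};
        Record r = new Record();
        fromBytes(r, buffer);
        System.out.println(r.aByte + " " + r.anInt + " " + r.aShort);
    }
}
```

A compile-time annotation processor would generate equivalent field-by-field code instead of looping over `getDeclaredFields()` at runtime, avoiding the reflection cost.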
-
Not able to save order after action execution
Hi ,
I'm facing a peculiar problem where the order is not saved and it runs forever. What I did:
I have a requirement to set the status of an order to complete when it reaches the zorder_close_date (I created a date rule for this). Hence I configured an action profile with an action with method COMPLETE_DOCUMENT and put the start condition current date = zorder_close_date. I have set the processing as immediate.
When I test this, the action is executed successfully, but when I click on save it runs forever and doesn't save.
I really have no clue why it happens like this; any inputs?
Thanks,
Shaik
Hi Shaik,
You are executing the method call via an action profile and changing the status of the order.
It seems that when your action executes it goes into a loop and does not allow you to save the order. Have you tried manually changing the status of the order and saving, without executing the action? That will tell you whether the problem is in your order save or in your method call.
Also try setting the processing time to "1. Process using selection report" and select the checkbox Schedule Automatically in your action definition; if there is a problem with the action condition, it will then show up as a failed action in your transaction.
Also check the loops in your method call. As I understand it, on the status change the system goes into a loop, cannot exit the loop to execute the order save, and so exits directly without saving.
Also You may Check the Links for Some Help in Action:
http://help.sap.com/saphelp_crm50/helpdata/en/83/785141eb54ba5fe10000000a155106/frameset.htm
http://help.sap.com/saphelp_crm50/helpdata/en/43/ce9370010f01b4e10000000a11466f/frameset.htm
http://help.sap.com/saphelp_crm50/helpdata/en/51/0302403D62C442E10000000A1550B0/content.htm
Hope this answers your queries...
Revert back for further clarification.
Thanks and Regards,
RK.
Added help links -
STATSPACK REPORT (BUFFER HIT RATIO)
my statspack report shows that my buffer hit ratio is 83%... what factors do I need to look at to improve the buffer hit ratio? Thanks
I deleted it because I realized I had taken the statspack report over a 1-day period.
Below is the Statspack report for 1 hour. Can you please let me know if I still need to increase the database buffer cache?
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
4254163 TEST1 1 28-Jun-07 23:30 10.2.0.3.0 NO
Host Name: Linux3 Num CPUs: 2 Phys Memory (MB): 7,968
~~~~
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- -------------------
Begin Snap: 32 03-Jul-07 11:59:13 23 11.0
End Snap: 42 03-Jul-07 14:07:33 26 11.3
Elapsed: 128.33 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 100M Std Block Size: 8K
Shared Pool Size: 100M Log Buffer: 33,823K
Load Profile Per Second Per Transaction
~~~~~~~~~~~~ --------------- ---------------
Redo size: 1,259.57 8,598.13
Logical reads: 148.39 1,012.92
Block changes: 6.41 43.76
Physical reads: 41.91 286.09
Physical writes: 0.73 5.02
User calls: 15.66 106.91
Parses: 4.07 27.77
Hard parses: 0.27 1.85
Sorts: 1.70 11.61
Logons: 0.01 0.07
Executes: 9.59 65.47
Transactions: 0.15
% Blocks changed per Read: 4.32 Recursive Call %: 83.09
Rollback per transaction %: 6.03 Rows per Sort: 11.39
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 71.77 In-memory Sort %: 100.00
Library Hit %: 93.15 Soft Parse %: 93.34
Execute to Parse %: 57.58 Latch Hit %: 100.00
Parse CPU to Parse Elapsd %: 97.12 % Non-Parse CPU: 86.74
Shared Pool Statistics Begin End
Memory Usage %: 91.37 92.38
% SQL with executions>1: 77.55 80.43
% Memory for SQL w/exec>1: 83.11 84.69
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
CPU time 132 48.3
db file sequential read 89,745 91 1 33.4
db file scattered read 29,289 35 1 13.0
control file parallel write 2,558 6 2 2.1
log file parallel write 2,294 3 1 1.0
Host CPU (CPUs: 2)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
0.11 0.11 2.26 2.65 95.09 0.90 0.24
Instance CPU
~~~~~~~~~~~~
% of total CPU for Instance: 1.06
% of busy CPU for Instance: 21.63
%DB time waiting for CPU - Resource Mgr:
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 7,967.6 7,967.6
SGA use (MB): 316.0 316.0
PGA use (MB): 57.8 62.6
% Host Mem used for SGA+PGA: 4.7 4.8
Time Model System Stats DB/Inst: TEST1/TEST1 Snaps: 32-42
-> Ordered by % of DB time desc, Statistic name
Statistic Time (s) % of DB time
sql execute elapsed time 212.3 92.7
DB CPU 124.2 54.2
parse time elapsed 21.6 9.4
hard parse elapsed time 19.7 8.6
PL/SQL execution elapsed time 4.3 1.9
hard parse (sharing criteria) elaps 1.4 .6
connection management call elapsed 1.4 .6
PL/SQL compilation elapsed time 1.2 .5
repeated bind elapsed time 0.1 .0
hard parse (bind mismatch) elapsed 0.1 .0
sequence load elapsed time 0.0 .0
DB time 228.9
background elapsed time 48.2
background cpu time 39.3
Wait Events DB/Inst: TEST1/TEST1 Snaps: 32-42
-> s - second, cs - centisecond, ms - millisecond, us - microsecond
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
db file sequential read 89,745 0 91 1 79.6
db file scattered read 29,289 0 35 1 26.0
control file parallel write 2,558 0 6 2 2.3
log file parallel write 2,294 0 3 1 2.0
db file parallel write 2,179 0 3 1 1.9
log file sync 1,089 0 2 2 1.0
os thread startup 7 0 1 120 0.0
latch free 3 0 0 89 0.0
SQL*Net break/reset to client 640 0 0 0 0.6
direct path read 140 0 0 1 0.1
control file sequential read 3,599 0 0 0 3.2
SQL*Net more data to client 2,121 0 0 0 1.9
db file parallel read 49 0 0 1 0.0
cursor: pin S wait on X 2 100 0 16 0.0
read by other session 4 0 0 5 0.0
direct path write 24 0 0 0 0.0
latch: shared pool 1 0 0 2 0.0
SQL*Net message from client 120,211 0 47,282 393 106.6
wait for unread message on broadc 7,631 100 7,517 985 6.8
Streams AQ: waiting for messages 1,540 100 7,512 4878 1.4
Streams AQ: qmn slave idle wait 275 0 7,508 27302 0.2
Streams AQ: qmn coordinator idle 554 51 7,508 13553 0.5
Streams AQ: waiting for time mana 25 52 6,643 ###### 0.0
SQL*Net message to client 120,215 0 0 0 106.6
class slave wait 7 0 0 1 0.0
SQL*Net more data from client 146 0 0 0 0.1
Background Wait Events DB/Inst: TEST1/TEST1 Snaps: 32-42
-> %Timeouts: value of 0 indicates value was < .5%. Value of null is truly 0
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg
%Time Total Wait wait Waits
Event Waits -outs Time (s) (ms) /txn
control file parallel write 2,557 0 6 2 2.3
log file parallel write 2,290 0 3 1 2.0
db file parallel write 2,179 0 3 1 1.9
os thread startup 7 0 1 120 0.0
db file sequential read 1,456 0 1 0 1.3
db file scattered read 25 0 0 8 0.0
control file sequential read 156 0 0 0 0.1
latch: shared pool 1 0 0 2 0.0
rdbms ipc message 25,017 92 59,496 2378 22.2
pmon timer 2,576 100 7,513 2917 2.3
Streams AQ: qmn slave idle wait 275 0 7,508 27302 0.2
Streams AQ: qmn coordinator idle 554 51 7,508 13553 0.5
smon timer 26 96 7,148 ###### 0.0
Streams AQ: waiting for time mana 25 52 6,643 ###### 0.0
Wait Event Histogram DB/Inst: TEST1/TEST1 Snaps: 32-42
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)
Total ----------------- % of Waits ------------------
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
LGWR wait for redo copy 7 100.0
SQL*Net break/reset to cli 640 99.2 .6 .2
SQL*Net more data to clien 2121 100.0
control file parallel writ 2558 84.2 12.0 .7 1.4 1.5 .2
control file sequential re 3599 99.9 .1
cursor: pin S wait on X 2 100.0
db file parallel read 49 93.9 2.0 4.1
db file parallel write 2179 68.2 19.9 6.8 4.0 .9 .1 .1
db file scattered read 29K 90.7 6.0 .5 .5 .6 .8 .9
db file sequential read 89K 89.4 2.8 1.3 3.6 1.5 .7 .6
direct path read 140 87.1 2.9 .7 1.4 7.1 .7
direct path write 24 100.0
latch free 3 100.0
latch: messages 1 100.0
latch: shared pool 1 100.0
log file parallel write 2294 77.4 17.3 2.0 1.3 1.1 .8 .2
log file sync 1089 62.4 28.8 3.3 1.7 2.5 1.1 .2
os thread startup 7 100.0
read by other session 4 50.0 25.0 25.0
SQL*Net message from clien 120K 95.2 1.6 .9 .3 .1 .2 .1 1.7
SQL*Net message to client 120K 100.0
SQL*Net more data from cli 146 100.0
Streams AQ: qmn coordinato 554 49.1 .2 .2 50.5
Streams AQ: qmn slave idle 275 100.0
Streams AQ: waiting for me 1540 .2 99.8
Streams AQ: waiting for ti 25 36.0 16.0 48.0
class slave wait 7 85.7 14.3
pmon timer 2577 .5 .1 .1 99.3
rdbms ipc message 25K 2.3 1.3 1.4 .4 .4 .3 32.1 61.8
smon timer 26 100.0
wait for unread message on 7631 .0 .0 100.0 .0 -
Storing input stream in a buffer
Hi, what is the best way to read an input stream and save it into a buffer?
Thanks,
Andrea
Not sure if any of the following are an improvement or not. But here are just a few thoughts. Why handle all the byte array allocation yourself? IMO, it would be easier to simply use a ByteArrayOutputStream at that point. The listeners can call toByteArray() and be passed an offset and length of what was read.
Now that I think about it, you could probably make it a bit more generic even than that. For example, here is a class I end up using all the time:
public final class Pipe {

    /** Buffer size to read in and output. */
    private static final int DEFAULT_BUFFER_SIZE = 2048;

    /**
     * Private constructor. Use public static facade methods instead.
     */
    private Pipe() {
        super();
    }

    /**
     * Pipes the specified binary data to the specified output stream.
     *
     * @param target Binary data to output
     * @param out Stream to write
     * @throws IOException
     */
    public static final void pipe(final byte[] target, final OutputStream out)
            throws IOException {
        assert (target != null) : "Missing byte array";
        assert (out != null) : "Missing output stream";
        pipe(new ByteArrayInputStream(target), out, true);
    }

    /**
     * Reads from the specified input stream and returns all data as an in-memory
     * binary array. Note: Since streams may be of any arbitrary size, this
     * method requires that you wrap your original stream in a {@link FiniteInputStream}.
     * Please ensure that this method is only used to read in data under, say, 2MB.
     *
     * @param in Stream to read
     * @return byte[] Binary data read
     * @throws IOException
     */
    public static final byte[] pipe(final FiniteInputStream in)
            throws IOException {
        ByteArrayOutputStream byteOut = new ByteArrayOutputStream();
        pipe(in, byteOut, true);
        return byteOut.toByteArray();
    }

    /**
     * Reads from the specified input stream and outputs immediately to the specified
     * output stream. <br>
     * <br>
     * If you have any confusion about which <code>Pipe</code> method to use, choose
     * this one. It has the lowest memory overhead and is the most efficient. Always
     * choose this method when streaming large amounts of data or content.
     *
     * @param in Input stream to read
     * @param out Output stream to write
     * @param close Close both streams if true
     * @return long Byte count piped
     * @throws IOException
     */
    public static final long pipe(final InputStream in, final OutputStream out, final boolean close)
            throws IOException {
        assert (in != null) : "Missing input stream";
        assert (out != null) : "Missing output stream";
        long bytesPiped = 0L;
        byte[] buffer = new byte[DEFAULT_BUFFER_SIZE];
        try {
            int bytesRead = in.read(buffer);
            while (bytesRead >= 0) {
                if (bytesRead > 0) {
                    out.write(buffer, 0, bytesRead);
                    bytesPiped = bytesPiped + bytesRead;
                }
                bytesRead = in.read(buffer);
            }
            out.flush();
            return bytesPiped;
        } finally {
            if (close) {
                IoHelper.close(in);
                IoHelper.close(out);
            }
        }
    }
}
You simply could add the listener feature (which I do think is useful and elegant). Note the FiniteInputStream and FiniteOutputStream are just sub-classes of FilterInputStream and FilterOutputStream that allow you to limit how many bytes are piped, if desired.
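For reference, here is the core of the pipe(...) idea stripped down so it is self-contained (a sketch without the FiniteInputStream/IoHelper helpers, not Saish's exact code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class ReadAll {
    // Drain any InputStream into an in-memory buffer; ByteArrayOutputStream
    // handles all the array growth, so no manual allocation is needed.
    static byte[] readFully(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[2048];
        int n;
        while ((n = in.read(buf)) >= 0) {
            out.write(buf, 0, n);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "hello buffer".getBytes("UTF-8");
        byte[] copy = readFully(new ByteArrayInputStream(data));
        System.out.println(copy.length); // 12
    }
}
```

A caller that wants the data as a String can decode the returned array; a listener mechanism would simply be notified after each `out.write(...)`.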
- Saish -
Hi
I am using the Schlumberger Smart Card Toolkit in order to create applets and applet instances.
I have created the applet inside the card, but I couldn't create an applet instance, because when I go to create the applet instance using the software, it asks for the install parameters. I still don't know whether it asks for these parameters in order to call the method
// Install method
public static void install(byte buffer[],short offset,byte length){
// create a CryptoTest applet instance
new CryptoTest(buffer, offset, length);
} // end of install method
How should I give such parameters? Please reply soon.
If you can give me an example, that would be perfect.
Thanks
Best regards
Denzil jayasinghe
gayandenzil@gmail.com
Normally installation parameters are just passed to the Applet itself. It can evaluate them in the install method. The system-specific parameters may be used by the OS to set minimum memory limits. Which install parameters are required by your smart card OS you should find in the data sheet, or contact Gemalto customer support (Schlumberger --> Axalto --> Gemalto).
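As a rough illustration only: the data handed to install() is conventionally LV-encoded (instance AID, then control info, then applet-specific parameters) per the Java Card convention. This plain-Java sketch assumes that layout; check your card's data sheet before relying on it.

```java
public class InstallParams {
    // Walk the conventional LV-encoded install data and return the
    // applet-specific parameter bytes (assumed layout, not card-specific).
    static byte[] appletData(byte[] bArray, int bOffset) {
        int aidLen = bArray[bOffset] & 0xFF;   // instance AID length
        int p = bOffset + 1 + aidLen;          // skip AID
        int ctlLen = bArray[p] & 0xFF;         // control info length
        p += 1 + ctlLen;                       // skip control info
        int dataLen = bArray[p] & 0xFF;        // applet-specific parameter length
        byte[] data = new byte[dataLen];
        System.arraycopy(bArray, p + 1, data, 0, dataLen);
        return data;
    }

    public static void main(String[] args) {
        // 5-byte instance AID, no control info, 2 bytes of applet parameters
        byte[] bArray = {5, (byte) 0xA0, 0, 0, 0, 1, 0, 2, 0x11, 0x22};
        byte[] params = appletData(bArray, 0);
        System.out.println(params.length); // 2
    }
}
```

On the card, the install() method would pass `bArray, bOffset, bLength` straight to the applet constructor, which parses them similarly.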
For reference check GP 2.1.1 card specification, 9.5 INSTALL Command, 9.5.2.3.6 INSTALL [for load] and INSTALL [for install] Parameters, Table 9-36: Install Parameter Tags -
Manual calculation of 'Buffer Cache hit ratio'
Using: Oracle 10.2.0.1.0, Redhat 4, 64bit.
Manual calculation of the ‘Buffer Cache hit ratio’ is very different from what is shown in statspack.
Statspack shows:
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 99.97
Buffer Hit %: 99.18 In-memory Sort %: 100.00
Library Hit %: 90.43 Soft Parse %: 52.56
Execute to Parse %: 80.14 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 96.21 % Non-Parse CPU: 56.89
Manual calculation (Got this formula from Sybex PT book, page 275):
SQL> select name, 1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))
from v$buffer_pool_statistics;
NAME 1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))
DEFAULT .700247215
Any idea why using v$buffer_pool_statistics gives wrong results?
Thanks and regards
SYS@oradocms11> select (P1.value + P2.value - P3.value)/(P1.value + P2.value)*100
2 from v$sysstat P1, v$sysstat P2, v$sysstat P3
3 where P1.name = 'db block gets'
4 and P2.name = 'consistent gets'
5 and P3.name = 'physical reads';
(P1.VALUE+P2.VALUE-P3.VALUE)/(P1.VALUE+P2.VALUE)*100
99.6977839
SYS@oradocms11> select name, 1 - (PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS)) from v$buffer_pool_statistics
NAME 1-(PHYSICAL_READS/(DB_BLOCK_GETS+CONSISTENT_GETS))
DEFAULT .997009542
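One thing worth noting: both v$sysstat and v$buffer_pool_statistics are cumulative since instance startup, while Statspack computes Buffer Hit % over the interval between two snapshots, so the two can legitimately disagree. A sketch of the same formula applied both ways, with made-up counter values:

```java
public class HitRatio {
    // 1 - physical_reads / (db_block_gets + consistent_gets)
    static double hitRatio(long dbBlockGets, long consistentGets, long physicalReads) {
        return 1.0 - (double) physicalReads / (dbBlockGets + consistentGets);
    }

    public static void main(String[] args) {
        // cumulative counters at the begin and end snapshots (illustrative numbers)
        long getsBegin = 1_000_000, getsEnd = 1_200_000;
        long physBegin = 300_000, physEnd = 302_000;

        double sinceStartup = hitRatio(0, getsEnd, physEnd);                       // ~0.75
        double interval = hitRatio(0, getsEnd - getsBegin, physEnd - physBegin);   // 0.99
        System.out.printf("since startup: %.4f, interval: %.4f%n", sinceStartup, interval);
    }
}
```

A heavy physical-read period early after startup can drag the cumulative ratio down (like the OP's .70) while the recent interval still shows 99%+.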
In my case, both give almost the same result. -
Add a Line ROI from a Camera image to a buffer
Hello! I am new to LabVIEW, and found this board with a lot of helpful answers to my questions. But I still have some left.
I would like to simulate a line camera. Currently, I use a USB webcam to acquire images. I would like to extract one line of each frame (for example, line 100 from the top) and save it into a buffer.
1.) How can I extract exactly one row?
2.) How do I put the extracted line from the second image under the first one, and so on?
Would be great if I could get some hints!
Thanks a lot
Jeanlux
Hello Jeanlux,
you have two possibilities to extract one row in LabVIEW.
But for that you necessarily need the Vision Development Module for LV. This module contains the programmable vision functions for LabVIEW.
30-day demo download link: http://digital.ni.com/softlib.nsf/websearch/4893086293FB4799862571CA004FB606?opendocument&node=13207...
In the LV add-on tool you can choose between the IMAQ Getrowcol.vi and IMAQ Extract.vi.
IMAQ Getrowcol.vi
Extracts a range of pixel values, either a row or column, from an image.
Related Examples
Refer to Examples\Vision\3. Applications\Gauging Example.llb for an example using this VI.
IMAQ Extract.vi
Extracts (reduces) an image or part of an image with adjustment of the horizontal and vertical resolution.
Related Examples
Refer to the following examples that use this VI:
Examples\Vision\2. Functions\Image Management\Extract Example.vi
Examples\Vision\3. Applications\Mechanical Assembly Example.vi
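Outside LabVIEW, the line-accumulation logic the OP describes can be sketched like this (frame and pixel types are illustrative; in LabVIEW the buffer would typically be a 2D array fed by a shift register):

```java
import java.util.ArrayList;
import java.util.List;

public class LineScan {
    private final List<int[]> buffer = new ArrayList<>();
    private final int row;

    LineScan(int row) { this.row = row; }   // e.g. row 100 for "line 100 from top"

    // Extract the fixed row from a frame and append it under the previous lines.
    void addFrame(int[][] frame) {
        buffer.add(frame[row].clone());
    }

    // The accumulated line-scan image: one row per processed frame.
    int[][] image() {
        return buffer.toArray(new int[0][]);
    }

    public static void main(String[] args) {
        LineScan scan = new LineScan(1);
        scan.addFrame(new int[][]{{0, 0}, {1, 2}});
        scan.addFrame(new int[][]{{0, 0}, {3, 4}});
        System.out.println(scan.image().length); // 2 accumulated lines
    }
}
```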
best regards
Uwe -
PSE4. Save as/JPEG Options/Quality. Can I select a default=12?
XP Pro, 3GB RAM
Due solely to carelessness, I have several times overwritten a JPG file of high resolution with one of low resolution, using the JPEG Options dialog window. This has happened when the setting is 3 on the scale of 0 to 12, but I did not select the "3" setting. Something in the PSE4 logic must set it, I presume. When I am closing a batch of images, and it is preset to 3, I manually set it to 12, and for several successive files, some logic resets it back to 3. After several close/save instances, the logic seems to get it that I want 12, and it quits resetting it to 3. (Now retired, I have decades of experience as a professional computer programmer, so I know that there must be some code that sets the quality to 3.)
I have often been able to recover from my overlooking the quality = 3 setting, by rescanning, reloading from my camera memory card, etc., but sometimes it is too late. I want to be able to set a "preference" for the quality to be 12, all the time, obviously with the option to reduce it when I want. Does this functionality exist in PSE4?
> I want to be able to set a "preference" for the quality to be 12, all the
> time, obviously with the option to reduce it when I want. Does this
> functionality exist in PSE4?
I don't think so. PSE warns you though, when overwriting a file. That may
be a good opportunity to reset the quality. Which brings up another point;
the original image file should really not be overwritten, but kept like a
"negative", so one can go back to it if an accident like you described
happens.
Juergen -
i have a problem with buffer busy waits. i got a message from Spotlight that my buffer cache hit ratio turns red at 100%. can someone help me to solve this problem?
Spotlight / Foglight alarms are very misleading at times. XYZ being busy for 5 or 10 seconds can be fine / ignored, but those tools highlight it in red and make it look like the DB is performing poorly.
My suggestion is, for the DB you are concerned about, run statspack taking snaps for a few hours; only if the stats report shows an issue would I start being worried and then investigate.
One of my DBs has the below from statspack. No complaints from customers or developers; the app is running fine. But if I plugged Spotlight into that DB, for sure I would get a few red or amber lights. (Of course there are times when I find Spotlight very useful.)
Instance Efficiency Percentages
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 97.64 In-memory Sort %: 100.00
Library Hit %: 97.37 Soft Parse %: 84.98
Execute to Parse %: 87.42 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 98.05 % Non-Parse CPU: 51.80
Regards, -
Diff. between SAVE & SPOOL
What is the difference between the SAVE & SPOOL commands at the SQL prompt?
And when do I use either command to access a query for later use?
SAVE saves the buffer to the named file. The example below saves the last used SQL commands.
SPOOL is used to send queries and their output to a file.
SQL> select sysdate from dual;
SYSDATE
23-DEC-06
SQL> save C:/x.txt
Created file C:/x.txt
SQL> spool x1.txt
SQL> select sysdate from dual;
SYSDATE
23-DEC-06
SQL> spool off
SQL> -
This just started happening randomly a couple months ago. Now it happens on every sleep or screen saver instance. I cannot trace it to a specific OSX update. I have gone as far as a full OSX recovery and restoration of Time Machine backup. Where can I look to get to the bottom of this?
Issue resolved with Mavericks update...
-
Hello again,
Sorry for asking a lot of questions, but I'm developing an application in JMF and I don't know very well how it works...
I'm trying to receive RTP data from the network, then uncompress it to RAW format (but it has to be coded in some codec like G.711, G.729 or GSM) and save it into a buffer. For this I'm following the DataSourceReader example. I have added some things, like stripping off the RTP header and saving the audio data into a byte buffer (instead of using the printInfo() method I'm using a saveIntoByteBuffer() method).
Another class is sending the data (saved in byteBuffer) over the network (UDP, not RTP-UDP).
And another one is receiving this data and creating a DataSource to play the audio data received.
I don't know exactly how to do this. Do I need to create a DataSource like the examples (LiveStream and DataSource from http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/LiveData.html) or can I do it like JpegImagesToMovie from http://java.sun.com/javase/technologies/desktop/media/jmf/2.1.1/solutions/JpegImagesToMovie.html ?
Sorry if it's an easy question, but I don't understand why implementations of the 'same' thing (a custom DataSource) are implemented differently.
Thanks
> Sorry if it's an easy question, but I don't understand why implementations of the 'same' thing (a custom DataSource) are implemented differently.
They aren't the same thing, actually. DataSource is the parent class, but there are 2 different kinds of DataSources.
A PushBufferDataSource will "push" the data out when it's available to be read. Whenever it decides it has data ready to be read, it informs whatever is reading from it that data is available.
A PullBufferDataSource will not do that. Whenever it has data ready to be read, it doesn't do anything to inform what's reading from it.
The next obvious question is, why does it matter?
PullBufferDataSources are good for situations where the data is always present. For instance, if you're playing a file from your hard drive, it's better to just let your Player object fetch data when it's needed. There's no need for a "data available" event, because the data is always available...
PushBufferDataSources are good for situations where data is being generated / received from an outside source. You can't read from it until the data comes in, so rather than blocking and waiting for the read, it'll tell your reader class when to come back for the data.
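The push/pull distinction can be illustrated outside JMF with toy sources that just mimic the two patterns (these are not JMF classes; JMF's actual types are PushBufferDataSource and PullBufferDataSource):

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

public class PushPullDemo {
    // Pull style: the reader asks for data whenever it wants some.
    static class PullSource {
        private int next = 0;
        int read() { return next++; }   // data is always available, like a file on disk
    }

    // Push style: the source notifies a listener whenever data arrives.
    static class PushSource {
        private Consumer<Integer> listener;
        void setListener(Consumer<Integer> l) { listener = l; }
        void dataArrived(int value) {   // e.g. a packet came in off the network
            if (listener != null) listener.accept(value);
        }
    }

    public static void main(String[] args) {
        PullSource pull = new PullSource();
        System.out.println(pull.read());        // the reader drives the transfer

        Queue<Integer> received = new ArrayDeque<>();
        PushSource push = new PushSource();
        push.setListener(received::add);        // the source drives the transfer
        push.dataArrived(42);
        System.out.println(received.peek());
    }
}
```

For live network audio the push pattern fits: the reader cannot know when a packet arrives, so the source calls back instead of making the reader block.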
Hope that helps!
P.S. For your needs, you'll want to be using a PushBufferDataSource, so the Live example code. -
Error while starting a process from JSP on a JBoss jBPM server
Hello!
I created a very simple jsp file just to try to start a process from it. I'm using JBoss JBpm.
This is my jsp file:
<%@ page import="org.jbpm.*" %>
<%@ page import="org.jbpm.graph.def.*" %>
<%@ page import="org.jbpm.graph.exe.*" %>
<%@ page import="org.jbpm.db.*" %>
<%!
private void startProcessDefinition() {
    String processDefinitionName = "websale";
    JbpmConfiguration jbpmConfiguration = JbpmConfiguration.getInstance();
    JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext();
    try {
        GraphSession graphSession = jbpmContext.getGraphSession();
        ProcessDefinition definition = graphSession.findLatestProcessDefinition(processDefinitionName);
        ProcessInstance instance = definition.createProcessInstance();
        instance.signal();
        jbpmContext.save(instance);
    } finally {
        jbpmContext.close();
    }
}
%>
<%@ page language="java" contentType="text/html; charset=ISO-8859-1"
pageEncoding="ISO-8859-1"%>
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
</head>
<body>
Jup3!
<% startProcessDefinition();%>
</body>
</html>
But unfortunately I get this error:
Stacktrace:
org.apache.jasper.servlet.JspServletWrapper.handleJspException(JspServletWrapper.java:504)
org.apache.jasper.servlet.JspServletWrapper.service(JspServletWrapper.java:393)
org.apache.jasper.servlet.JspServlet.serviceJspFile(JspServlet.java:314)
org.apache.jasper.servlet.JspServlet.service(JspServlet.java:264)
javax.servlet.http.HttpServlet.service(HttpServlet.java:810)
org.jboss.web.tomcat.filters.ReplyHeaderFilter.doFilter(ReplyHeaderFilter.java:96)
...
Does anyone have an idea where the problem is?
Thanks for help.
Sorry, no idea what the problem is.
I suggest creating a servlet and moving all your business logic there. Test it and verify it works. Then put its result in request scope via request.setAttribute(),
then in the JSP page, get the value via request.getAttribute(). This is in keeping with not having any business logic in the JSP page (separation of concerns).
It also makes it far easier to debug and set breakpoints (can be done in servlet, not in JSP). -
Hi: I'm analyzing this STATSPACK report: it is a "volume test" on our UAT server, so most input is via bind variables. Our shared pool is well utilized in Oracle. The Oracle redo logs are not appropriately configured on this server, as 2 of the 'Top 5 wait events' are redo-related.
I need to know what other information can be dug out of the 'foreground wait events' & 'background wait events', and what, in combination with the 'Top 5 wait events', can help us better understand how the server/test went. The number of wait events can be overwhelming, so I'd appreciate any helpful diagnostics or analysis. Database is Oracle 11.2.0.4 upgraded from 11.2.0.3, on IBM AIX Power Systems 64-bit, level 6.x
STATSPACK report for
Database DB Id Instance Inst Num Startup Time Release RAC
~~~~~~~~ ----------- ------------ -------- --------------- ----------- ---
700000XXX XXX 1 22-Apr-15 12:12 11.2.0.4.0 NO
Host Name Platform CPUs Cores Sockets Memory (G)
~~~~ ---------------- ---------------------- ----- ----- ------- ------------
dXXXX_XXX AIX-Based Systems (64- 2 1 0 16.0
Snapshot Snap Id Snap Time Sessions Curs/Sess Comment
~~~~~~~~ ---------- ------------------ -------- --------- ------------------
Begin Snap: 5635 22-Apr-15 13:00:02 114 4.6
End Snap: 5636 22-Apr-15 14:00:01 128 8.8
Elapsed: 59.98 (mins) Av Act Sess: 0.6
DB time: 35.98 (mins) DB CPU: 19.43 (mins)
Cache Sizes Begin End
~~~~~~~~~~~ ---------- ----------
Buffer Cache: 2,064M Std Block Size: 8K
Shared Pool: 3,072M Log Buffer: 13,632K
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ ------------------ ----------------- ----------- -----------
DB time(s): 0.6 0.0 0.00 0.00
DB CPU(s): 0.3 0.0 0.00 0.00
Redo size: 458,720.6 8,755.7
Logical reads: 12,874.2 245.7
Block changes: 1,356.4 25.9
Physical reads: 6.6 0.1
Physical writes: 61.8 1.2
User calls: 2,033.7 38.8
Parses: 286.5 5.5
Hard parses: 0.5 0.0
W/A MB processed: 1.7 0.0
Logons: 1.2 0.0
Executes: 801.1 15.3
Rollbacks: 6.1 0.1
Transactions: 52.4
Instance Efficiency Indicators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 99.98 Optimal W/A Exec %: 100.00
Library Hit %: 99.77 Soft Parse %: 99.82
Execute to Parse %: 64.24 Latch Hit %: 99.98
Parse CPU to Parse Elapsd %: 53.15 % Non-Parse CPU: 98.03
Shared Pool Statistics Begin End
Memory Usage %: 10.50 12.79
% SQL with executions>1: 69.98 78.37
% Memory for SQL w/exec>1: 70.22 81.96
Top 5 Timed Events Avg %Total
~~~~~~~~~~~~~~~~~~ wait Call
Event Waits Time (s) (ms) Time
CPU time 847 50.2
enq: TX - row lock contention 4,480 434 97 25.8
log file sync 284,169 185 1 11.0
log file parallel write 299,537 164 1 9.7
log file sequential read 698 16 24 1.0
Host CPU (CPUs: 2 Cores: 1 Sockets: 0)
~~~~~~~~ Load Average
Begin End User System Idle WIO WCPU
1.16 1.84 19.28 14.51 66.21 1.20 82.01
Instance CPU
~~~~~~~~~~~~ % Time (seconds)
Host: Total time (s): 7,193.8
Host: Busy CPU time (s): 2,430.7
% of time Host is Busy: 33.8
Instance: Total CPU time (s): 1,203.1
% of Busy CPU used for Instance: 49.5
Instance: Total Database time (s): 2,426.4
%DB time waiting for CPU (Resource Mgr): 0.0
Memory Statistics Begin End
~~~~~~~~~~~~~~~~~ ------------ ------------
Host Mem (MB): 16,384.0 16,384.0
SGA use (MB): 7,136.0 7,136.0
PGA use (MB): 282.5 361.4
% Host Mem used for SGA+PGA: 45.3 45.8
Foreground Wait Events DB/Inst: XXXXXs Snaps: 5635-5636
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg %Total
%Tim Total Wait wait Waits Call
Event Waits out Time (s) (ms) /txn Time
enq: TX - row lock contentio 4,480 0 434 97 0.0 25.8
log file sync 284,167 0 185 1 1.5 11.0
Disk file operations I/O 8,741 0 4 0 0.0 .2
direct path write 13,247 0 3 0 0.1 .2
db file sequential read 6,058 0 1 0 0.0 .1
buffer busy waits 1,800 0 1 1 0.0 .1
SQL*Net more data to client 29,161 0 1 0 0.2 .1
direct path read 7,696 0 1 0 0.0 .0
db file scattered read 316 0 1 2 0.0 .0
latch: shared pool 144 0 0 2 0.0 .0
CSS initialization 30 0 0 3 0.0 .0
cursor: pin S 10 0 0 9 0.0 .0
row cache lock 41 0 0 2 0.0 .0
latch: row cache objects 19 0 0 3 0.0 .0
log file switch (private str 8 0 0 7 0.0 .0
library cache: mutex X 28 0 0 2 0.0 .0
latch: cache buffers chains 54 0 0 1 0.0 .0
latch free 290 0 0 0 0.0 .0
control file sequential read 1,568 0 0 0 0.0 .0
log file switch (checkpoint 4 0 0 6 0.0 .0
direct path sync 8 0 0 3 0.0 .0
latch: redo allocation 60 0 0 0 0.0 .0
SQL*Net break/reset to clien 34 0 0 1 0.0 .0
latch: enqueue hash chains 45 0 0 0 0.0 .0
latch: cache buffers lru cha 7 0 0 2 0.0 .0
latch: session allocation 5 0 0 1 0.0 .0
latch: object queue header o 6 0 0 1 0.0 .0
ASM file metadata operation 30 0 0 0 0.0 .0
latch: In memory undo latch 15 0 0 0 0.0 .0
latch: undo global data 8 0 0 0 0.0 .0
SQL*Net message from client 6,362,536 0 278,225 44 33.7
jobq slave wait 7,270 100 3,635 500 0.0
SQL*Net more data from clien 7,976 0 15 2 0.0
SQL*Net message to client 6,362,544 0 8 0 33.7
Background Wait Events DB/Inst: XXXXXs Snaps: 5635-5636
-> Only events with Total Wait Time (s) >= .001 are shown
-> ordered by Total Wait Time desc, Waits desc (idle events last)
Avg %Total
%Tim Total Wait wait Waits Call
Event Waits out Time (s) (ms) /txn Time
log file parallel write 299,537 0 164 1 1.6 9.7
log file sequential read 698 0 16 24 0.0 1.0
db file parallel write 9,556 0 13 1 0.1 .8
os thread startup 146 0 10 70 0.0 .6
control file parallel write 2,037 0 2 1 0.0 .1
Log archive I/O 35 0 1 30 0.0 .1
LGWR wait for redo copy 2,447 0 0 0 0.0 .0
db file async I/O submit 9,556 0 0 0 0.1 .0
db file sequential read 145 0 0 2 0.0 .0
Disk file operations I/O 349 0 0 0 0.0 .0
db file scattered read 30 0 0 4 0.0 .0
control file sequential read 5,837 0 0 0 0.0 .0
ADR block file read 19 0 0 4 0.0 .0
ADR block file write 5 0 0 15 0.0 .0
direct path write 14 0 0 2 0.0 .0
direct path read 3 0 0 7 0.0 .0
latch: shared pool 3 0 0 6 0.0 .0
log file single write 56 0 0 0 0.0 .0
latch: redo allocation 53 0 0 0 0.0 .0
latch: active service list 1 0 0 3 0.0 .0
latch free 11 0 0 0 0.0 .0
rdbms ipc message 314,523 5 57,189 182 1.7
Space Manager: slave idle wa 4,086 88 18,996 4649 0.0
DIAG idle wait 7,185 100 7,186 1000 0.0
Streams AQ: waiting for time 2 50 4,909 ###### 0.0
Streams AQ: qmn slave idle w 129 0 3,612 28002 0.0
Streams AQ: qmn coordinator 258 50 3,612 14001 0.0
smon timer 43 2 3,605 83839 0.0
pmon timer 1,199 99 3,596 2999 0.0
SQL*Net message from client 17,019 0 31 2 0.1
SQL*Net message to client 12,762 0 0 0 0.1
class slave wait 28 0 0 0 0.0
thank you very much!
Hi: I just realized it now: a large number of concurrent transactions is by design in this "Volume Test" - it simulates a large incoming transaction volume, so I guess the wait on enq: TX - row lock contention is expected.
The facts: (1) the redo logs on the UAT server are known to be not well tuned; (2) the volume test is 5% slower this year, even though the team keeps the amount of data in the test the same by importing production data each time. So why did it slow down by 5% this year?
The wait event histogram is pasted below - anyone interested in taking a look? Any ideas?
Wait Event Histogram DB/Inst: XXXX/XXXX Snaps: 5635-5636
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)
Total ----------------- % of Waits ------------------
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
ADR block file read 19 26.3 5.3 10.5 57.9
ADR block file write 5 40.0 60.0
ADR file lock 6 100.0
ARCH wait for archivelog l 14 100.0
ASM file metadata operatio 30 100.0
CSS initialization 30 100.0
Disk file operations I/O 9090 97.2 1.4 .6 .4 .2 .1 .1
LGWR wait for redo copy 2447 98.5 .5 .4 .2 .2 .2 .1
Log archive I/O 35 40.0 8.6 25.7 2.9 22.9
SQL*Net break/reset to cli 34 85.3 8.8 5.9
SQL*Net more data to clien 29K 99.9 .0 .0 .0 .0 .0
buffer busy waits 1800 96.8 .7 .7 .6 .3 .4 .5
control file parallel writ 2037 90.7 5.0 2.1 .8 1.0 .3 .1
control file sequential re 7405 100.0 .0
cursor: pin S 10 10.0 90.0
db file async I/O submit 9556 99.9 .0 .0 .0
db file parallel read 1 100.0
db file parallel write 9556 62.0 32.4 1.7 .8 1.5 1.3 .1
db file scattered read 345 72.8 3.8 2.3 11.6 9.0 .6
db file sequential read 6199 97.2 .2 .3 1.6 .7 .0 .0
direct path read 7699 99.1 .4 .2 .1 .1 .0
direct path sync 8 25.0 37.5 12.5 25.0
direct path write 13K 97.8 .9 .5 .4 .3 .1 .0
enq: TX - row lock content 4480 .4 .7 1.3 3.0 6.8 12.3 75.4 .1
latch free 301 98.3 .3 .7 .7
latch: In memory undo latc 15 93.3 6.7
latch: active service list 1 100.0
latch: cache buffers chain 55 94.5 3.6 1.8
latch: cache buffers lru c 9 88.9 11.1
latch: call allocation 6 100.0
latch: checkpoint queue la 3 100.0
latch: enqueue hash chains 45 97.8 2.2
latch: messages 4 100.0
latch: object queue header 7 85.7 14.3
latch: redo allocation 113 97.3 1.8 .9
latch: row cache objects 19 89.5 5.3 5.3
latch: session allocation 5 80.0 20.0
latch: shared pool 147 90.5 1.4 2.7 1.4 .7 1.4 2.0
latch: undo global data 8 100.0
library cache: mutex X 28 89.3 3.6 3.6 3.6
log file parallel write 299K 95.6 2.6 1.0 .4 .3 .2 .0
log file sequential read 698 29.5 .1 4.6 46.8 18.9
log file single write 56 100.0
log file switch (checkpoin 4 25.0 50.0 25.0
log file switch (private s 8 12.5 37.5 50.0
log file sync 284K 93.3 3.7 1.4 .7 .5 .3 .1
os thread startup 146 100.0
row cache lock 41 85.4 9.8 2.4 2.4
DIAG idle wait 7184 100.0
SQL*Net message from clien 6379K 86.6 5.1 2.9 1.3 .7 .3 2.8 .3
SQL*Net message to client 6375K 100.0 .0 .0 .0 .0 .0 .0
Wait Event Histogram DB/Inst: XXXX/xxxx Snaps: 5635-5636
-> Total Waits - units: K is 1000, M is 1000000, G is 1000000000
-> % of Waits - column heading: <=1s is truly <1024ms, >1s is truly >=1024ms
-> % of Waits - value: .0 indicates value was <.05%, null is truly 0
-> Ordered by Event (idle events last)
Total ----------------- % of Waits ------------------
Event Waits <1ms <2ms <4ms <8ms <16ms <32ms <=1s >1s
SQL*Net more data from cli 7976 99.7 .1 .1 .0 .1
Space Manager: slave idle 4086 .1 .2 .0 .0 .3 3.2 96.1
Streams AQ: qmn coordinato 258 49.2 .8 50.0
Streams AQ: qmn slave idle 129 100.0
Streams AQ: waiting for ti 2 50.0 50.0
class slave wait 28 92.9 3.6 3.6
jobq slave wait 7270 .0 100.0
pmon timer 1199 100.0
rdbms ipc message 314K 10.3 7.3 39.7 15.4 10.6 5.3 8.2 3.3
smon timer 43 100.0
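Since enq: TX - row lock contention dominates the foreground waits (434 s, 25.8% of call time, with 75.4% of waits in the 32ms-1s bucket), it may be worth identifying which sessions and statements were doing the blocking during the snapshot window. A minimal sketch of such a query is below - it assumes you are licensed for the Diagnostics Pack (ASH), and the exact event name should match your version's V$EVENT_NAME:

```sql
-- Hypothetical sketch: find the blockers behind enq: TX - row lock
-- contention waits sampled by Active Session History.
-- Requires the Diagnostics Pack license to query V$ACTIVE_SESSION_HISTORY.
SELECT blocking_session,
       sql_id,
       COUNT(*) AS samples          -- each sample ~= 1 second of wait time
FROM   v$active_session_history
WHERE  event = 'enq: TX - row lock contention'
GROUP  BY blocking_session, sql_id
ORDER  BY samples DESC;
```

If the same blocking_session/sql_id pair tops the list, that statement (and the commit behavior of the session running it) is the first place to look; for a window that has already aged out of memory, the same query can be run against DBA_HIST_ACTIVE_SESS_HISTORY filtered by the snapshot interval.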