GZIP decompression of chunked data?
I'm trying to decompress a chunked stream of GZIP compressed data, but I don't know how to solve this without major inefficient workarounds.
The data is coming from a web server and is sent chunked: before each chunk, its size is announced as a plain-text (hexadecimal) line, with a size of 0 marking the end of the stream.
Simply wrapping the socket stream with the GZIPInputStream, like in the examples, only works if the stream is entirely GZIP, but this is not the case here.
I have to repeatedly do a readLine on the input stream to get the length of the chunk, and then I need to send that number of bytes from the input stream to the GZIP decompressor. I'm stuck here, as I don't know a way to 'send' selected bytes to the decompressor. I only know how to create the decompressor by wrapping an existing input stream, and merely constructing it already consumes bytes from the input stream to verify that it is a GZIP-compressed stream.
The only thing I can come up with is to store the entire compressed data in a huge String, wrap it in a custom-made InputStream subclass that streams bytes from my String, and wrap that in the decompressor. Is this really the only way?
For example, a webpage like this sends its data chunked and GZIP-compressed: http://www.anidb.net
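(For anyone hitting the same problem: you don't need to buffer everything in a String. A small FilterInputStream that strips the chunk framing on the fly can itself be wrapped by GZIPInputStream. The sketch below is illustrative only, assuming hex chunk-size lines with CRLF delimiters as described above; it is demonstrated against an in-memory byte stream rather than a real socket, and DechunkingInputStream, buildWire and fetch are names made up for this example.)

```java
import java.io.*;
import java.util.zip.*;

/** Strips HTTP/1.1 chunked framing, exposing only the chunk payload bytes. */
class DechunkingInputStream extends FilterInputStream {
    private int remaining = 0;     // bytes left in the current chunk
    private boolean done = false;

    DechunkingInputStream(InputStream in) { super(in); }

    private String readLine() throws IOException {
        StringBuilder sb = new StringBuilder();
        int c;
        while ((c = in.read()) != -1 && c != '\n')
            if (c != '\r') sb.append((char) c);
        return sb.toString();
    }

    @Override public int read() throws IOException {
        if (done) return -1;
        if (remaining == 0) {
            String line = readLine();
            if (line.isEmpty()) line = readLine();          // skip CRLF after previous chunk
            remaining = Integer.parseInt(line.trim(), 16);  // chunk size is a hex line
            if (remaining == 0) { done = true; return -1; } // "0" terminates the body
        }
        remaining--;
        return in.read();
    }

    @Override public int read(byte[] b, int off, int len) throws IOException {
        int c = read();   // one byte at a time: simple, correct, good enough for a sketch
        if (c == -1) return -1;
        b[off] = (byte) c;
        return 1;
    }
}

public class ChunkedGzipDemo {
    /** Simulates the server: gzip a string and split it into two chunks. */
    static byte[] buildWire(String payload) throws IOException {
        ByteArrayOutputStream gz = new ByteArrayOutputStream();
        try (GZIPOutputStream z = new GZIPOutputStream(gz)) {
            z.write(payload.getBytes("ISO-8859-1"));
        }
        byte[] c = gz.toByteArray();
        int half = c.length / 2;
        ByteArrayOutputStream wire = new ByteArrayOutputStream();
        wire.write(String.format("%x\r\n", half).getBytes("ISO-8859-1"));
        wire.write(c, 0, half);
        wire.write("\r\n".getBytes("ISO-8859-1"));
        wire.write(String.format("%x\r\n", c.length - half).getBytes("ISO-8859-1"));
        wire.write(c, half, c.length - half);
        wire.write("\r\n0\r\n\r\n".getBytes("ISO-8859-1"));
        return wire.toByteArray();
    }

    /** Client side: de-chunk on the fly, then decompress. */
    static String fetch(InputStream rawStream) throws IOException {
        InputStream in = new GZIPInputStream(new DechunkingInputStream(rawStream));
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        int b;
        while ((b = in.read()) != -1) out.write(b);
        return out.toString("ISO-8859-1");
    }

    public static void main(String[] args) throws IOException {
        byte[] wire = buildWire("hello chunked gzip");
        System.out.println(fetch(new ByteArrayInputStream(wire))); // prints: hello chunked gzip
    }
}
```

This keeps memory flat regardless of response size, since no intermediate String or byte array ever holds the whole body.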
Edited by: 834306 on Feb 6, 2011 11:16 AM
Edited by: 834306 on Feb 7, 2011 4:54 AM
Thanks for your responses.
I indeed thought I had to de-chunk manually. I'm fairly new to all this networking stuff; I'd never even heard of an HttpURLConnection.
What I did before was opening a Socket, sending the manually-constructed request header and then receiving and manually decoding the http header. Something like this (shortened for readability)
socket = new Socket(url.getHost(), port);
dataOutputStream = new DataOutputStream(socket.getOutputStream());
inputStream = new BufferedInputStream(socket.getInputStream());
String message = "GET " + url.getFile() + " HTTP/1.1" + // HTTP requires CRLF line endings
"\r\nHost: " + url.getHost() + //
"\r\nUser-Agent: Mozilla/1.2 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.13)" + //
"\r\nAccept: text/html,application/xml" + //
"\r\nAccept-Language: en-us,en;q=0.5" + //
"\r\nAccept-Charset: ISO-8859-1" + //
"\r\nConnection: close\r\n\r\n";
dataOutputStream.write(message.getBytes());
<here a lot of code to decode the header and put the field-value pairs in a HashMap>
<here a lot of code to receive the body of the message, while taking into account the reported content-length and chunking, as well as sending updates to a ProgressListener>

I tried HttpURLConnection and indeed it seems a lot easier than the way I did things. However, what I receive from the HttpURLConnection is de-chunked but still GZIP-compressed, so it is not as transparent as I had hoped. I wrapped it in a decoder and it works.
Here's what I have now:
HttpURLConnection connection = null;
try {
    connection = (HttpURLConnection) url.openConnection();
    connection.addRequestProperty("Host", url.getHost());
    connection.addRequestProperty("User-Agent", "Mozilla/1.2 (Windows; U; Windows NT 5.1; en-US; rv:1.9.1.13)");
    connection.addRequestProperty("Accept", "text/html,application/xml");
    connection.addRequestProperty("Accept-Language", "en-us,en;q=0.5");
    connection.addRequestProperty("Accept-Charset", "ISO-8859-1");
    connection.addRequestProperty("Connection", "close");
    connection.setUseCaches(false);
    connection.setDoInput(true);
    InputStream inp = connection.getInputStream();
    // null-safe comparison: the Content-Encoding header may be absent
    if ("gzip".equals(connection.getHeaderField("Content-Encoding")))
        inp = new GZIPInputStream(inp);
    int i;
    while ((i = inp.read()) != -1)
        logout.write(i);
    connection.disconnect();
} catch (IOException e) {
    if (connection != null) connection.disconnect();
}

Thanks a bunch.
Edited by: 834306 on Feb 7, 2011 5:01 AM
Similar Messages
-
Gzip decompression within Flex
Hello everybody,
I want to decode the Xml data with Gzip encoding from the server. With Google I only find examples of Gzip encoding for the AIR runtime.
Have anybody experience how to use Gzip decompression for server data in Flex?
I would be grateful for any advice.
Thanks a lot, Thomas

ByteArray.uncompress() might help some.
Alex Harui
Flex SDK Developer
Adobe Systems Inc.
Blog: http://blogs.adobe.com/aharui -
Hello All!
I'm inserting millions of records into a table from another table, and I'm also parsing out some XML data. My problem is that this insert takes hours; when I leave for home at night, my session times out and everything rolls back to 0 rows inserted.
I want to commit every 10000 rows, so that when it does time out, I can pick up where I left off. I tried something below, but it doesn't save the records after I lose the connection. Any thoughts?
Thanks Rudy
DECLARE @i INT = 1
WHILE @i <= 10000 BEGIN
IF @i % 10000 = 1
BEGIN TRANSACTION;
INSERT INTO CourtRecordEvent
(CaseNumber, CountyNumber, HistorySequenceNumber, EventType, EventDate, Tag, Timezone, [Description],
ctofcFirstName, ctofcLastName, ctofcMiddleName, ctofcSuffix, sealCtofcFirstName, sealCtofcLastName, sealCtofcMiddleName,
sealCtofcSuffix, courtRptrFirstName, courtRptrLastName, courtRptrMiddleName, courtRptrSuffix, DktText, IsMoneyEnabled, EventAmt, sealCtofcTypeCodeDescr)
SELECT
COALESCE (pref.value('(caseNo/text())[1]', 'varchar(20)'),'0') as CaseNumber,
COALESCE (pref.value('(countyNo/text())[1]', 'int'),'0') as CountyNumber,
COALESCE (pref.value('(histSeqNo/text())[1]', 'int'),'0') as HistorySequenceNumber,
COALESCE (pref.value('(eventType/text())[1]', 'varchar(20)'),'0') as EventType,
COALESCE (pref.value('(eventDate/text())[1]', 'datetime'),'0') as EventDate,
COALESCE (pref.value('(tag/text())[1]', 'varchar(50)'),'0') as Tag,
pref.value('(eventDate_TZ/text())[1]', 'varchar(20)') as Timezone,
pref.value('(descr/text())[1]', 'nvarchar(max)') as [Description],
pref.value('(ctofcNameF/text())[1]', 'nvarchar(150)') as ctofcFirstName,
pref.value('(ctofcNameL/text())[1]', 'nvarchar(150)') as ctofcLastName,
pref.value('(ctofcNameM/text())[1]', 'nvarchar(150)') as ctofcMiddleName,
pref.value('(ctofcSuffix/text())[1]', 'nvarchar(20)') as ctofcSuffix,
pref.value('(sealCtofcNameF/text())[1]', 'nvarchar(150)') as sealCtofcFirstName,
pref.value('(sealCtofcNameL/text())[1]', 'nvarchar(150)') as sealCtofcLastName,
pref.value('(sealCtofcNameM/text())[1]', 'nvarchar(150)') as sealCtofcMiddleName,
pref.value('(sealCtofcSuffix/text())[1]', 'nvarchar(20)') as sealCtofcSuffix,
pref.value('(courtRptrNameF/text())[1]', 'nvarchar(150)') as courtRptrFirstName,
pref.value('(courtRptrNameL/text())[1]', 'nvarchar(150)') as courtRptrLastName,
pref.value('(courtRptrNameM/text())[1]', 'nchar(10)') as courtRptrMiddleName,
pref.value('(courtRptrSuffix/text())[1]', 'int') as courtRptrSuffix,
pref.value('(dktTxt/text())[1]', 'nvarchar(max)') as DktText,
pref.value('(isMoneyEnabled/text())[1]', 'bit') as IsMoneyEnabled,
pref.value('(eventAmt/text())[1]', 'money') as EventAmt,
pref.value('(sealCtofcTypeCodeDescr/text())[1]', 'nvarchar(50)') as sealCtofcTypeCodeDescr
FROM
dbo.CaseHistoryRawData_XML CROSS APPLY
RawData.nodes('//CourtRecordEventCaseHist') AS CourtRec(pref)
IF @i % 10000 = 0
COMMIT;
END
GO

Your loop will do the same big insert of 429 million rows 10000 times over.
You need to change your SELECT to insert a smaller number of rows each time, e.g. using TOP on the unprocessed data or some expression based on @i.
You should insert 10000 rows of data in each loop iteration, and the loop condition should check whether all data has been processed. That way, even if the session times out, you will be able to restart the previous job. -
Unable to decompress large data with CL_ABAP_UNGZIP_BINARY_STREAM
Hello all,
i would like to stream the huge amount of XML data to the application server compressed. It seems that the class pair CL_ABAP_GZIP_BINARY_STREAM / CL_ABAP_UNGZIP_BINARY_STREAM should do this job.
So far the compression works, because the compressed chunks are small enough. I considered the dependency between the buffer length and the overall amount of bytes to be compressed.
The decompression seems to produce some kind of memory leak, or at least fails for an unknown reason. For relatively small amounts of compressed data, CL_ABAP_UNGZIP_BINARY_STREAM works just fine. Larger files, even if decompressed in a loop in small portions (chunks of max 4KB or smaller), cannot be decompressed without increasing the buffer size. For very large files (decompressed in a loop in small chunks) the buffer simply cannot be made large enough, and the procedure ends up in an out-of-memory error.
So either I am using the CL_ABAP_GZIP_BINARY_STREAM / CL_ABAP_UNGZIP_BINARY_STREAM pair incorrectly, or there is some trouble with the memory management inside the C implementation of these classes.
If somebody knows about this problem or has got an idea on how to resolve it, any help would be very welcome!
Here the error message i get:
An exception occurred that is explained in detail below.
The exception, which is assigned to class 'CX_SY_COMPRESSION_ERROR'
The function IctDecompressStream returns the return code 30
Code that i use to decompress:
* buffer
data l_gzip_buff_decomp type xstring.
* buffer size
data l_gzip_buff_len_decomp type i value 16777216.
* empty hex
data c_empty_x type x.
* chunk of compressed data to be decompressed at once
data l_buff type x length 4096.
data: uref type ref to user_outbuf.
data: csref type ref to cl_abap_ungzip_binary_stream.
create object uref.
create object csref
exporting
output_handler = uref.
call method csref->set_out_buf(
importing out_buf = l_gzip_buff_decomp
out_buf_len = l_gzip_buff_len_decomp ).
l_file = 'very-large_file.xml.gz'.
open dataset l_file in binary mode for input.
do.
read dataset l_file into l_buff length l_len.
if l_len > 0.
call method csref->decompress_binary_stream
exporting
gzip_in = l_buff
gzip_in_len = l_len.
else.
close dataset l_file.
exit.
endif.
enddo.
* close the stream and flush the unzipped buffer
call method csref->decompress_binary_stream_end
exporting
gzip_in = c_empty_x
gzip_in_len = 0.

Hi Gena,
I'm facing exactly the same problem as you...
Since this post is an old one, I imagine that you may not remember, but I have to try...
Have you solved it? If yes, could you please tell me how?
I've tried to use CL_ABAP_GZIP and CL_ABAP_UNGZIP_BINARY_STREAM and I'm getting the same error 30 at the IctDecompressStream function.
Tks in advance,
Flavio. -
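For comparison with the ABAP stream classes above, the same chunk-at-a-time pattern (feed compressed input in small pieces, drain a fixed-size output buffer, so memory use stays flat) can be sketched with Java's java.util.zip.Inflater. This is only an illustration of the streaming idea, not ABAP; it uses raw deflate data rather than gzip, and inflateInChunks is a name invented for this sketch:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.*;

public class StreamingInflate {
    // Decompress raw deflate data, feeding it in at most `chunk`-byte pieces
    // and draining a fixed `chunk`-byte output buffer each iteration.
    static byte[] inflateInChunks(byte[] compressed, int chunk) throws DataFormatException {
        Inflater inf = new Inflater();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[chunk];
        int off = 0;
        while (!inf.finished()) {
            if (inf.needsInput()) {                  // previous input piece fully consumed
                int len = Math.min(chunk, compressed.length - off);
                inf.setInput(compressed, off, len);
                off += len;
            }
            int n = inf.inflate(buf);                // never needs more than `chunk` bytes of space
            out.write(buf, 0, n);
        }
        inf.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        byte[] data = new byte[200_000];             // large, highly compressible input
        Deflater def = new Deflater();
        def.setInput(data);
        def.finish();
        ByteArrayOutputStream comp = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        while (!def.finished()) comp.write(buf, 0, def.deflate(buf));
        def.end();

        byte[] restored = inflateInChunks(comp.toByteArray(), 4096);
        System.out.println(restored.length);         // prints: 200000
    }
}
```

The key point is that neither the input nor the output buffer ever has to hold the whole file, which is the behavior the ABAP stream pair is supposed to provide as well.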
JavaScript, Gzip Compression/Decompression, Ajax calls
Does either the Java Agent or Thin Client execute JavaScript when playing back in eLoad?
Is there a way to configure it to do so? If I recall neither do, but would like confirmation.
What about measuring gzip decompression on each page? Will either agent decompress and measure it? or at least execute it?
What about ajax calls? Would running the Java Agent give us an idea of the performance of the page where lots of ajax calls are being made?

jimbilly
1) JavaScript is not executed during a load test by either the Thin Client or the Java Agent, and cannot be configured to be.
2) I'm not sure whether either client will decompress a GZip file. However, I don't think it would be very difficult to do using the Java Agent and writing the time to a file. I don't see much advantage in doing so, though, as every client machine will have different processing power, so the timings will be very different from the ones that you will log.
3) All the ajax calls will be recorded and played back like normal navigations; just make sure that you have the proxy on during the recording.
If you need some help doing step 2, please let me know.
Alex -
WebService : error to retrieve big result gzip encoded
Hi, I have a strange error with results from a SOAP webservice when the data is encoded in gzip or deflate mode.
I have a webservice that returns, in non-encoded mode, a resultset of 95542 bytes. The same resultset compressed with gzip is 8251 bytes.
There is no problem if there is no encoding between server and Flash Player (9,0,47 and 9,0,115 tested). If the Accept-Encoding header is set to gzip, deflate, then the server sends the resultset encoded. The browser receives this resultset (traced with WireShark), but Flash Player doesn't load the result and the WebService call times out.
If I limit the data returned by this webservice by limiting the number of rows returned, Flash is able to handle the result. For example: uncompressed data of 84070 bytes gives an encoded resultset of 7506 bytes, and these data are handled fine by Flash Player.
I don't understand where the problem is. Does Flash Player have a gzip decompression limitation?
Please help
Thanks
Transfer-Encoding: chunked
Hi
I'm creating an HTTP client, but I've got problems with the chunked data encoding. Does anybody know a link with more info about it? How much should I read from each chunk, and where is that written? Did anybody face such a problem before? My client tells the server it's Firefox:
GET / HTTP/1.1
Host: mail.yahoo.com
User-Agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.7.10) Gecko/20050716 Firefox/1.0.6
Accept: text/xml,application/xml,application/xhtml+xml,text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Keep-Alive: 300
Connection: keep-alive
and the server sends this:
HTTP/1.1 200 OK
Date: Sun, 25 Sep 2005 13:45:31 GMT
P3P: policyref="http://p3p.yahoo.com/w3c/p3p.xml", CP="CAO DSP COR CUR ADM DEV TAI PSA PSD IVAi IVDi CONi TELo OTPi OUR DELi SAMi OTRi UNRi PUBi IND PHY ONL UNI PUR FIN COM NAV INT DEM CNT STA POL HEA PRE GOV"
Cache-Control: private
Connection: close
Transfer-Encoding: chunked
Content-Type: text/html
Content-Encoding: gzip
Set-Cookie: B=9dikf6d1jdafr&b=3&s=5m; expires=Tue, 02-Jun-2037 20:00:00 GMT; path=/; domain=.yahoo.com
1562
please help!
hmm.. looks like nobody is gonna answer.. :(

3.6.1 Chunked Transfer Coding
The chunked encoding modifies the body of a message in order to
transfer it as a series of chunks, each with its own size indicator,
followed by an OPTIONAL trailer containing entity-header fields. This
allows dynamically produced content to be transferred along with the
information necessary for the recipient to verify that it has
received the full message.
Chunked-Body   = *chunk
                 last-chunk
                 trailer
                 CRLF
chunk          = chunk-size [ chunk-extension ] CRLF
                 chunk-data CRLF
chunk-size     = 1*HEX
last-chunk     = 1*("0") [ chunk-extension ] CRLF
chunk-extension= *( ";" chunk-ext-name [ "=" chunk-ext-val ] )
chunk-ext-name = token
chunk-ext-val  = token | quoted-string
chunk-data     = chunk-size(OCTET)
trailer        = *(entity-header CRLF)
The chunk-size field is a string of hex digits indicating the size of
the chunk. The chunked encoding is ended by any chunk whose size is
zero, followed by the trailer, which is terminated by an empty line.
The trailer allows the sender to include additional HTTP header
fields at the end of the message. The Trailer header field can be
used to indicate which header fields are included in a trailer (see
section 14.40).
A server using chunked transfer-coding in a response MUST NOT use the
trailer for any header fields unless at least one of the following is
true:
a)the request included a TE header field that indicates "trailers" is
acceptable in the transfer-coding of the response, as described in
section 14.39; or,
b)the server is the origin server for the response, the trailer
fields consist entirely of optional metadata, and the recipient
could use the message (in a manner acceptable to the origin server)
without receiving this metadata. In other words, the origin server
is willing to accept the possibility that the trailer fields might
be silently discarded along the path to the client.
This requirement prevents an interoperability failure when the
message is being received by an HTTP/1.1 (or later) proxy and
forwarded to an HTTP/1.0 recipient. It avoids a situation where
compliance with the protocol would have necessitated a possibly
infinite buffer on the proxy.
An example process for decoding a Chunked-Body is presented in
appendix 19.4.6.
All HTTP/1.1 applications MUST be able to receive and decode the
"chunked" transfer-coding, and MUST ignore chunk-extension extensions
they do not understand. -
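Following the grammar above, parsing the chunk-size line comes down to stripping any ";"-introduced chunk-extension and reading the remainder as hex. A minimal Java sketch (Java being the language used earlier in this thread; parse is a hypothetical helper name):

```java
public class ChunkSize {
    // Parse a chunk-size line: 1*HEX optionally followed by ";<chunk-extension>"
    static int parse(String line) {
        int semi = line.indexOf(';');   // chunk-extension starts at ';', if present
        String hex = (semi >= 0 ? line.substring(0, semi) : line).trim();
        return Integer.parseInt(hex, 16);
    }

    public static void main(String[] args) {
        System.out.println(parse("1562"));          // the size seen in the trace above: 0x1562 = 5474 bytes
        System.out.println(parse("a;name=value"));  // extension ignored: 10
        System.out.println(parse("0"));             // last-chunk marker: 0
    }
}
```

After reading that many bytes of chunk-data, the decoder must also consume the trailing CRLF before the next chunk-size line.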
Adobe Air needs HTTP gzip compression
Hello
We are developing an Adibe Air application. We use SOAP for
service calls and we depend entirely upon gzip HTTP compression to
make the network performance even vaguely acceptable. SOAP is such
a fat format that it really needs gzip compression to get the
responses down to a reasonable size to pass over the Internet.
Adobe Air does not currently support HTTP gzip compression
and I would like to request that this feature be added ASAP. We
can't release our application until it can get reasonable network
performance through HTTP gzip compression.
Thanks
Andrew

Hi blahxxxx,
Sorry for the slow reply -- I wanted to take some time to try this out rather than give an incomplete response.
It's not built into AIR, but if you're using Flex/ActionScript for your application you can use a gzip library to decompress a gzipped SOAP response (or any other gzipped response from a server -- it doesn't have to be SOAP). Danny Patterson gives an example of how to do that here: http://blog.dannypatterson.com/?p=133
I've been prototyping a way to make a subclass of the Flex WebService class that has this built in, so if I can get that working it would be as easy as using the Flex WebService component.
I did some tests of this technique, just to see for myself if the bandwidth savings is worth the additional processing overhead of decompressing the gzip data. (The good news is that the decompression part is built into AIR -- just not the specific gzip format -- so the most processor-intensive part of the gzip decompression happens in native code.)
Here is what I found: I tested this using the http://validator.nu/ HTML validator web service to validate the HTML source of http://www.google.com/. This isn't a SOAP web service, but it does deliver an XML response that's fairly large, so it's similar to SOAP.
The size of the payload (the actual HTTP response body) was 5321 bytes compressed, 45487 bytes uncompressed. I ran ten trials of each variant. All of this was done in my home, where I have a max 6Mbit DSL connection. In the uncompressed case I measured the time starting immediately after sending the HTTP request and ending as soon as the response was received. In the compressed case I started the timer immediately after sending the HTTP request and ended it after receiving the response, decompressing it, and assigning the decompressed content to a ByteArray (so the compressed-case times include decompression, not just download). The average times for ten trials were:
Uncompressed (text) response: 1878.6 ms
Compressed (gzip) response: 983.1 ms
Obviously these will vary a lot depending on the payload size, the structure of the compressed data, the speed of the network, the speed of the computer, etc. But in this particular case there's obviously a benefit to using gzipped data.
I'll probably write up the test I ran, including the source, and post it on my blog. I'll post another reply here once I've done that. -
GZip Compression (Yes this old chestnut Again)
I have a client who requires this to work in the .NET 4 Client Profile, so no 4.5 compression improvements for me. All I need is two functions, one to compress and one to decompress, both of which take in and return Unicode strings. I have spent weeks, nay months, searching, tweaking, and writing to no avail. I have looked at all the existing snippets I could find, but they all either take in or return a Byte() or Stream or something else, and when I tweak them to take/return a string they don't work. What I have so far (a snippet found online, tweaked) is:
Public Shared Function Decompress(bytesToDecompress As String) As String
Using stream = New IO.Compression.GZipStream(New IO.MemoryStream(System.Text.Encoding.Unicode.GetBytes(bytesToDecompress)), IO.Compression.CompressionMode.Decompress)
Const size As Integer = 4096
Dim buffer = New Byte(size - 1) {}
Using memoryStream = New IO.MemoryStream()
Dim count As Integer
Do
count = stream.Read(buffer, 0, size)
If count > 0 Then
memoryStream.Write(buffer, 0, count)
End If
Loop While count > 0
Return System.Text.Encoding.Unicode.GetString(memoryStream.ToArray())
End Using
End Using
End Function
Public Shared Function Compress(Input As String) As String
Dim bytes() As Byte = System.Text.Encoding.Unicode.GetBytes(Input)
Using stream = New IO.MemoryStream()
Using zipStream = New IO.Compression.GZipStream(stream, IO.Compression.CompressionMode.Compress)
zipStream.Write(bytes, 0, bytes.Length)
Return System.Text.Encoding.Unicode.GetString(stream.ToArray())
End Using
End Using
End Function
However the problem is if you run the following you get nothing out.
Decompress(Compress("Test String"))
Please help me; I know this has been covered to death elsewhere, but I am missing something when tweaking. I am sure it's something very simple, but it is causing me great stress! Many many thanks indeed!

I have now found a workaround which may be useful to others, so I am posting it here. It doesn't do exactly what was described above, which would be ideal, but here is the code:
Private Sub WriteCompressed(ByVal Path As String, Data As String)
Using WriteFileStream As IO.FileStream = IO.File.Create(Path)
Using DataStream As New IO.MemoryStream(System.Text.Encoding.Unicode.GetBytes(Data))
Using Compressor As IO.Compression.DeflateStream = New IO.Compression.DeflateStream(WriteFileStream, IO.Compression.CompressionMode.Compress)
DataStream.CopyTo(Compressor)
End Using
End Using
End Using
End Sub
Private Function ReadCompressed(ByVal Path As String) As String
Using ReadFileStream As IO.FileStream = System.IO.File.OpenRead(Path)
Using DataStream As New IO.MemoryStream
Using Decompressor As IO.Compression.DeflateStream = New IO.Compression.DeflateStream(ReadFileStream, IO.Compression.CompressionMode.Decompress)
Decompressor.CopyTo(DataStream)
Return System.Text.Encoding.Unicode.GetString(DataStream.ToArray)
End Using
End Using
End Using
End Function
Many thanks indeed to everyone who helped and any more input is still welcome. Thanks again!
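As a side note, another way to get a true string-to-string round trip is to Base64-encode the compressed bytes instead of reinterpreting them as character data, since arbitrary compressed bytes are not valid text. A sketch of that pattern in Java rather than VB.NET (the method names are made up for the example):

```java
import java.io.*;
import java.util.Base64;
import java.util.zip.*;

public class GzipString {
    // Gzip a string, then Base64-encode the bytes so they survive in a String.
    static String compress(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream z = new GZIPOutputStream(bos)) {
            z.write(s.getBytes("UTF-8"));
        }   // closing the stream flushes the gzip trailer before we read the bytes
        return Base64.getEncoder().encodeToString(bos.toByteArray());
    }

    static String decompress(String b64) throws IOException {
        byte[] gz = Base64.getDecoder().decode(b64);
        try (GZIPInputStream z = new GZIPInputStream(new ByteArrayInputStream(gz))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = z.read(buf)) != -1) out.write(buf, 0, n);
            return out.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(decompress(compress("Test String"))); // prints: Test String
    }
}
```

Note also that the compressed stream must be closed before its bytes are collected; the original VB.NET Compress returned from inside the Using block before the GZipStream had flushed, which is a second reason the round trip produced nothing.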
That's great. You should propose your post as the answer to end the thread.
I just finished with a different method. However, I will guess the real issue is that strings cannot contain various binary values such as nulls, delete, bell, backspace, etc. (as you can see, the zeros in RTB2 in the image below are null values).
I forgot to add that during testing I was receiving errors because compressed data written to a string, then read from the string into bytes and decompressed, would produce errors during decompression saying the data did not contain a valid GZip header. So when GZip compresses, it uses a format for the compressed data that is known to GZip; otherwise it would not have a header in the compressed data.
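That guess is right: every gzip stream begins with the fixed magic bytes 0x1f 0x8b, and that is the header the decompressor complains about when the bytes get mangled by a text round trip. A quick check (in Java rather than VB.NET, as a sketch):

```java
import java.io.*;
import java.util.zip.*;

public class GzipMagic {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream z = new GZIPOutputStream(bos)) {
            z.write("anything".getBytes("UTF-8"));
        }
        byte[] gz = bos.toByteArray();
        // The first two bytes of any gzip stream are the magic number 0x1f, 0x8b.
        System.out.printf("%02x %02x%n", gz[0] & 0xff, gz[1] & 0xff); // prints: 1f 8b
    }
}
```

If a round trip through a string changes either of those two bytes, the decompressor will reject the data with exactly the "not a valid GZip header" error described above.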
You can see what binary values (Decimal) would be in a string at this link
Ascii Table.
Therefore I used the code below to get a string from a RichTextBox1 to a byte array. That is compressed and output to a byte array.
The information in the "compressed" byte array is then converted byte by byte to the bytes value in RichTextBox2 with each value separated by a comma.
To decompress the string in RichTextBox2 that text is split into a string array on a comma. Then each item is converted to a byte into a List(Of Byte). The List(Of Byte).ToArray is then decompressed back into a byte array and that byte array is converted
to a string in RichTextBox3.
I suspect the actual compressed byte array could be written to a binary file using the
BinaryWriter Class if the original strings were supposed to be compressed and attached as files for transfer somewhere.
Option Strict On
Imports System.IO
Imports System.IO.Compression
Imports System.Text
Public Class Form1
Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
Me.Location = New Point(CInt((Screen.PrimaryScreen.WorkingArea.Width / 2) - (Me.Width / 2)), CInt((Screen.PrimaryScreen.WorkingArea.Height / 2) - (Me.Height / 2)))
RichTextBox1.Text = My.Computer.FileSystem.ReadAllText("C:\Users\John\Desktop\Northwind Database.Txt")
Label1.Text = "Waiting"
Label2.Text = "Waiting"
Label3.Text = "Waiting"
Label4.Text = "Waiting"
Button2.Enabled = False
End Sub
Dim BytesCompressed() As Byte
Private Sub Button1_Click(sender As Object, e As EventArgs) Handles Button1.Click
RichTextBox2.Clear()
RichTextBox3.Clear()
Label1.Text = "Waiting"
Label2.Text = "Waiting"
Label3.Text = "Waiting"
Label4.Text = "Waiting"
If RichTextBox1.Text <> "" Then
Dim uniEncoding As New UnicodeEncoding()
BytesCompressed = Compress(uniEncoding.GetBytes(RichTextBox1.Text))
Label1.Text = RichTextBox1.Text.Count.ToString
Label2.Text = BytesCompressed.Count.ToString
For Each Item In BytesCompressed
RichTextBox2.AppendText(CStr(CInt(Item)) & ",")
Next
RichTextBox2.Text = RichTextBox2.Text.Remove(RichTextBox2.Text.Count - 1, 1)
End If
Button2.Enabled = True
End Sub
Private Shared Function Compress(data As Byte()) As Byte()
Using compressedStream = New MemoryStream()
Using zipStream = New GZipStream(compressedStream, CompressionMode.Compress)
zipStream.Write(data, 0, data.Length)
zipStream.Close()
Return compressedStream.ToArray()
End Using
End Using
End Function
Private Sub Button2_Click(sender As Object, e As EventArgs) Handles Button2.Click
Button2.Enabled = False
Dim RTB2Split() As String = RichTextBox2.Text.Split(","c)
Dim BytesToUse As New List(Of Byte)
For Each Item In RTB2Split
BytesToUse.Add(CByte(Item))
Next
Label3.Text = BytesToUse.Count.ToString
Dim uniEncoding As New UnicodeEncoding()
RichTextBox3.Text = uniEncoding.GetString(Decompress(BytesToUse.ToArray))
Label4.Text = RichTextBox3.Text.Count.ToString
End Sub
Private Shared Function Decompress(data As Byte()) As Byte()
Using compressedStream = New MemoryStream(data)
Using zipStream = New GZipStream(compressedStream, CompressionMode.Decompress)
Using resultStream = New MemoryStream()
zipStream.CopyTo(resultStream)
Return resultStream.ToArray()
End Using
End Using
End Using
End Function
End Class
La vida loca -
Step by step guide to create a RFC connection and data transfer
hi
could i get a step by step guide to transfer/read data from SAP and legacy system using the concept of RFC.
Regards
Shiva

Hi
Here are the steps.
SM59 Transaction is used for Connection Establishment with Destination.
When you establish a connection to a client through a destination, the HTTP connection must first be entered in transaction SM59.
There are two types of HTTP connection in transaction SM59. Call transaction SM59 to display the different RFC destinations.
The HTTP connection to the external server (connection type G) and the HTTP connection to the R/3 system (connection type H) are different only in their logon procedures. Their technical settings are the same. To display the technical settings, double-click a connection.
You can choose from three tabs. Under Technical Settings, you can specify the following:
· Target Host: The host to which you want to connect.
Note that if you are using HTTPS as a protocol, you have to specify the full host name (with domain).
· Service No.: Here, you specify the port. The destination host must of course be configured in such a way that the specified port understands the corresponding protocol (HTTP or HTTPS). See Parameterizing the ICM and the ICM Server Cache.
· Path Prefix: At the time when the connection to this destination is initiated, the system inserts this sub-path before ~request_uri.
· HTTP Proxy Options: Here, you can configure a proxy for HTTP connections: You can determine the proxy host and service, as well as users and passwords for the HTTP connection.
The tab page Logon/Security will be different depending on whether you have selected a HTTP connection to an external server (connection type G) or a HTTP connection to an R/3 system (connection type H).
HTTP Connection to an External Server (Connection Type G)
Choose the connection you want to use. You can choose from the following logon procedures:
· No Logon: The server program does not request a user or password.
· Basic Authentication: The server program requests a user and password. Basic Authentication is a standard HTTP method for authenticating users. When a user logs on to the target system, he or she provides a user ID and password as authentication. This information is then sent in a header variable as a Base 64-encoded string to the server, through the HTTP connection.
· SSL Client Certificate: If you use client certificates for authentication, the client authentication is performed through the Secure Sockets Layer (SSL) protocol. In this case, you must also select the SSL Client PSE of the SAP Web AS that contains the relevant certificate for the authentication. The target system must handle the issuer of the SAP Web AS client certificate as a trusted system.
Under Logon/Security, you can also activate SSL to ensure that HTTPS is the protocol used (if you select SSL, make sure that the correct port is entered under Technical Settings). In the security transaction STRUST you can determine which type of SSL client is used. (Getting Started with the Trust Manager, Trust Manager).
The field Authorization for Destination has been implemented as an additional level of protection. We recommend that you specify a user and password for the RFC destination.
HTTP Connection to an R/3 System (Connection Type H)
Here, you can specify more settings for authentication in the target system.
The settings under Technical Settings and Special Options are the same as for connection type G. Under Logon/Security, the connection type H has additional logon procedures. As with external servers, you can activate and deactivate SSL and specify an authorization.
Because the target system is an SAP system, you can set the client and language for the logon as well as the user name and password. If you check Current User, you have to specify the password.
The following authentication procedures are available: Basic Authentication, SAP Standard, and SAP Trusted System, and SSL Client Certificate.
· HTTP Basic Authentication: Logon with user and password
· SAP Standard: This procedure uses an RFC logon procedure. The RFC Single Sign-On (SSO) procedure is valid within the one system. The same SAP user (client, language, and user name) is used for logon.
· SAP Trusted System: Trusted RFC logon to a different SAP system (see Trusted System: Maintaining Trust Relationships Between SAP Systems).
· SSL Client Certificate: The SSL protocol enables you to use client certificates for the logon.
Type G/H (SM59)
Timeout:
When sending a HTTP request, you can use this parameter to specify the maximum response time for the connection.
HTTP Setting:
You can use the HTTP version to specify the protocol version of the HTTP request (HTTP 1.0 or 1.1).
Compression:
You can use this option to activate the gzip compression for the request body. This can only be activated in HTTP Version 1.1.
Compressed Response:
In the standard setting, the SAP Web Application Server sends the Accept Encoding field as a header field with the value gzip, if the application server can handle this compression. This notifies the partner that the caller can handle gzip decompression, and that the partner can send compressed data. If you want to prevent a compressed response being sent, choose the option No.
HTTP Cookie:
You can use this option to control the way received cookies are handled.
You can specify the following for cookies:
· Accept them automatically
· Reject them automatically
· Accept or reject them in a prompt
· Use a handler for the event IF_HTTP_CLIENT~EVENTKIND_HANDLE_COOKIE to process the cookies in the program.
In the next section, you can read about the parallelization of requests.
Thanks,
vijay
reward points if helpful -
Hello all,
For a client, I am working on a project where a live RTMP stream is published to an Adobe FMS 3.5.6 server from a java application, using Red5 0.9.1 RTMPClient code.
This works fine, until the timestamp becomes higher than 0xFFFFFF after 4.6 hours and the RTMP extended timestamp field starts being used. I have already found one issue: when the extended timestamp was written after the header, the last 4 bytes of the data were cut off. I have fixed this locally, and the data being sent now seems to me to be conformant to the spec. However, FMS still throws an error message in the core log and then kills the connection from the Red5 client. Here is the error message:
2011-06-03 14:28:02 13060 (e)2611029 Bad network data; terminating connection : chunkstream error:message length 11893407 is longerthan max rtmp packet length -
2011-06-03 14:28:02 13060 (e)2631029 Bad network data; terminating connection : (Adaptor: _defaultRoot_, VHost: _defaultVHost_, IP: 127.0.0.1, App: live/_definst_, Protocol: rtmp, Client: 5290168480216205379, Handle: 2147942405) : 05 FF FF FF 00 13 = 09 01 00 00 00 01 00 01 01 ' 01 00 00 00 00 00 13 4 09 0 00 00 01 ! 9A & L 0F FA F6 12 , B4 A6 CE H 8A AB DC G BB d k 1B 9F ) 13 13 D2 9A E5 t 8 B8 8D 94 ! 8A AE F6 AF } " U 0 D3 Q EF FF ~ 8D 97 D9 FF BE A3 F3 C9 97 o 9D # F9 7F h A4 F7 } / FB & F1 DC 9C BF BD D3 E7 CA 97 FE E2 B9 E4 F7 9E 1A F6 BA } C9 w FC _ / / w FE n EF D7 P 9C F4 BE 82 8E F7 | BE 97 B4 BB D7 FE ED I / FB D1 93 9A F9 X \ 85 BD DD I E3 4 E8 M 13 D3 " ) BE A9 92 E5 83 D4 B4 12 DE D5 A3 E6 F4 k DE BF Q 3 A0 g r A4 f D9 BD w * } F7 r 8A S 2 . AB BD EE ^ l f AF E1 0B $ AF 9D D7 - BF E8 ! D3 } D3 i E3 B8 F2 M A8 " B1 A5 EF s ] A5 BC 96 E5 u e X q D2 F1 r F9 i 92 b EE Z d F9 * A6 BB FD 17 w 4 DD 3 o u EB ] ] EF FE B5 B1 0A F2 A0 DD FD B2 98 DF E8 e F6 CB FD 96 V % A5 D5 k ] FD w EF AF k v AA E8 ! 9F / w BE FA 9A _ E F2 D3 , ? 17 } AD 7 EC B3 } 07 B5 | z { { A5 = 11 90 CF BF ; 4 FE EF 95 F7 E7 DF B9 , AF z 91 CF C9 BD DE CB { F5 17 } F2 E5 D7 DF z E6 [ 96 > Y m 9F EB AF DD D8 E8 v B9 A8 E9 % A7 | 1 CF 8B D Z k N DF F8 N FA S R FE . ~ CB A 9 E1 ) 8F 8E BB EC c 6 13 F1 AC FD FD FC 8A F7 F3 K B9 FA ^ / A4 FC B9 AA F6 DE C2 [ 1A E c r B3 BF E5 EC B5 x 94 FD . 
A9 t I Q % EA EC DE | K FE z A4 97 F9 " 1 0F CA FB F5 F5 p 9E 99 3 - ; B8 F4 F1 FF t A3 EC BC # DE AC 91 13 19 o < 06 F5 FD 7F 7 _ $ D B t B5 0D 8A C1 C1 BA 0B FE DB B7 83 _ } BD z F7 CB { FC M A9 8D = D5 B1 < 85 = EF E1 ; BA H y FC BC B4 C A2 D9 ` e E4 94 H 5 13 ' 93 93 8E E C2 1C R 97 9 X B7 FF 10 9F { ) F1 CF AB AC ] EE H A2 DE D3 C5 m F6 K A2 A7 A2 89 D2 z EB DF 97 ^ k 9E 99 BB E7 B6 97 w { ~ + C7 B2 } FE ' C4 | B6 o H DD r A8 9F DC FF F9 Q b l 93 T B6 EE FF 11 j CD s P C F1 3 R I F8 D8 R 9D 93 AA D5 + DE FC BE " B9 E1 ` CB BD 0F F5 C7 AA w CF 8D p 9A F7 g f N FF 84 B7 K Q 93 g E1 - D3 s } w v AE 96 98 ED CF BA E9 2 . f 99 95 97 o 13 CA F7 s e $ F4 B5 15 C4 A8 DE M F7 w \ 8D 00 C6 C2 b D3 / 7 w F2 ' BF CD 89 FF > D7 FB BC A2 S N FB A5 CD AF D3 F9 9D DF AE B5 17 CF 9D B7 , B9 9 ^ 7F [ 93 84 F7 } _ EA DF u \ 99 Z t E CA M EF 7 " AD FE 92 9E n 7F EB D8 C { 99 8B 9E w H BF B1 | g 9F F3 FA E1 - E5 CB BB x CF p 8B D2 w v EF w FA E2 F7 s C5 AC $ FC B4 DB BE G E4 DC F0 A0 96 F3 ! t DC FF % A5 CB A4 ^ AB D2 BD E7 9A E ' 08 + AF U 17 EB 8A w A7 N E4 A5 x 93 12 _ - ; 09 DD DF m 11 BE w \ } BA D3 t BC D9 97 9B C5 7F D8 H F1 D 7 8A ^ FA n F0 B8 W E6 84 5 - 8 B5 h o C4 F7 83 P 88 CB AE m t BB L 95 A9 s 90 A2 Y o DF K _ / l D2 D1 C9 91 ' E4 BD / / D 97 m BB E7 14 93 % C5 ; DD CF D8 : ~ B5 4 F FA U F0 8F w w DC FD 83 FC 13 EF w p DA A5 07 _ * - 1D 14 9D D5 84 F E6 F0 FF E4 15 w n A5 9F DE d AE F5 " - f D2 AE 96 1F # FA F1 x C1 L DF l M 06 8A E4 z DB 17 BA l DA e 15 CD 85 86 1F 09 82 h ] C6 { E7 C5 AF Z C5 B0 83 v D9 03 FC / ~ -
The message for which the hex dump is displayed, is a video message of size 4925 bytes. Below is the basic logging in my application:
*** Event sent to RTMP connector: Video - ts: 16777473 length: 4925. Waiting time: -57937, event timestamp: 16777473
14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.s.consumer.ConnectionConsumer - Message timestamp: 16777473
14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Channel id: 5
14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Last ping time for connection: -1
14:28:02.045 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Client buffer duration: 0
14:28:02.046 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - Packet timestamp: 16777473; tardiness: -30892; now: 1307104082045; message clock time: 1307104051152, dropLiveFuturefalse
14:28:02.046 [RtmpPublisher-workerThread] DEBUG o.r.s.n.r.codec.RTMPProtocolEncoder - !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!12b Wrote expanded timestamp field
14:28:02.046 [NioProcessor-22] DEBUG o.r.server.net.rtmp.BaseRTMPHandler - Message sent
I have captured the entire frame containing this message with wireshark, and annotated it a bit. You can find it here:
http://pastebin.com/iVtphPgU
The video message of 4925 bytes (hex 00 13 3D) is cut up into chunks of 1024 bytes (chunkSize 1024 set by Red5 client and sent to FMS). Indeed, after the 12-byte header and the 4-byte extended timestamp, there are 1024 bytes before the 1-byte header for the next chunk (hex C5). The chunks after that also contain 1024 bytes after the chunk header. This appears correct to me (though please correct me if I'm wrong).
When we look at the error message in the core log, the hex dump displayed also contains 1024 bytes, but it starts from the beginning of the message header. The last 16 bytes of the message chunk itself are not shown.
My question is this: is the hex dump in the error message always capped to 1024 bytes, or did FMS really read too little data?
Something that may be of help is the reported 'too long' message length 11893407. This corresponds to hex B5 7A 9F, which can also be found in the packet, namely at row 0c60 (I've annotated it as [b5 7a 9f]). This location is exactly 16 bytes after the start of the 4th chunk's data, not really a place to look for timestamps.
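The claimed correspondence is easy to verify numerically; this small check (illustrative only) confirms that the bytes b5 7a 9f, read as a 3-byte big-endian RTMP message length, give exactly the value FMS reported:

```java
public class LengthCheck {
    public static void main(String[] args) {
        // The 'too long' message length FMS reported in the core log.
        int reported = 11893407;
        // Bytes b5 7a 9f interpreted as a 3-byte big-endian length field.
        int fromHex = 0xB57A9F;
        System.out.println(reported == fromHex);             // prints "true"
        System.out.println(String.format("%06X", reported)); // prints "B57A9F"
    }
}
```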
My assumptions during this bug hunting session were the following (would be nice if someone could validate these for me):
- message length, as specified in the RTMP 12- and 8-byte headers, defines the total number of data bytes for the message, NOT including the header of the first message chunk, its extended timestamp field, or the 1-byte headers for subsequent chunks. The behaviour is the same whether or not the message has an extended timestamp.
- chunk size, as set by the chunkSize message, defines the total number of data bytes for the chunk, not including the header or extended timestamp field. The behaviour is the same whether or not the message has an extended timestamp.
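Under these two assumptions, the layout observed in the capture can be checked arithmetically. A sketch (helper names are mine, and the 12-byte header plus 4-byte extended timestamp on the first chunk and 1-byte headers on continuation chunks are the assumptions stated above):

```java
public class ChunkLayout {

    // Number of chunks needed for a message body of messageLen bytes
    // with the given RTMP chunk size (header bytes not counted).
    static int chunkCount(int messageLen, int chunkSize) {
        return (messageLen + chunkSize - 1) / chunkSize;
    }

    // Total bytes on the wire for one message, assuming a 12-byte type-0
    // header plus a 4-byte extended timestamp on the first chunk, and a
    // 1-byte type-3 header on each continuation chunk.
    static int wireBytes(int messageLen, int chunkSize) {
        int chunks = chunkCount(messageLen, chunkSize);
        return 12 + 4 + messageLen + (chunks - 1);
    }

    public static void main(String[] args) {
        // The 4925-byte video message with chunk size 1024:
        // 4 full chunks of 1024 plus a final 829-byte chunk.
        System.out.println(chunkCount(4925, 1024)); // prints "5"
        System.out.println(wireBytes(4925, 1024));  // prints "4945"
    }
}
```

The 5 chunks imply 4 one-byte continuation headers (the hex C5 bytes in the capture), which matches the annotated wireshark frame.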
I believe I've chased this problem as far as I can without having access to the FMS 3.5 code, or at least being able to crank up the debug logging to the per-message level. I realize it's a pretty detailed issue and a long shot, but being able to publish a stream continuously 24/7 is critical for the project.
I would be very grateful if someone could have a look at this hex dump to see if the message itself is correct, and if so, to have a look at how FMS3.5.6 handles this.
Don't hesitate to ask me for more info if it can help.
Thanks in advance
Davy Herben
Solidity
Hello,
It took a bit longer than expected, but I have managed to create a minimal test application that will reproduce the error condition on all machines I've tested on. The application will simply read an H264 file and publish it to an FMS as a live stream. To hit the error condition faster, without having to wait 4.6 hours, the application will add a fixed offset to all timestamps before sending it to the FMS.
I have created two files:
http://www.solidity.be/publishtest.jar : Runnable java archive with all libraries built in
http://www.solidity.be/publishtest.zip : Zip file containing sources and libraries
You can run the jar as follows:
java -jar publishtest.jar <inputFile> <server> <port> <application> <stream> <timestampOffset>
- inputFile: path to an H264 input video file
- server: hostname or IP of FMS server to publish to
- port: port number to publish to (1935)
- application: application to publish to (live)
- stream: stream to publish to (output)
- timestampOffset: number of milliseconds to add to the timestamp of each event, in hexadecimal format. Putting FFFFFF here will cause the server to reject the connection immediately, while FFFF00 or FFF000 will allow the publishing to run for a while before the FMS kills it
Example of a complete command line:
java -jar publishtest.jar /home/myuser/Desktop/movie.mp4 localhost 1935 live output FFF000
Good luck with the bug hunting. Let me know if there is anything I can help you with.
Kind regards,
Davy Herben -
Sqlplus – spool data to a flat file
Hi,
Does any Oracle expert here know why the sqlplus command cannot spool all the data into a flat file at one time?
I have tried the commands below. It seems like every time I get a different file size :(
a) sqlplus -s $dbUser/$dbPass@$dbName <<EOF|gzip -c > ${TEMP_FILE_PATH}/${extract_file_prefix}.dat.Z
b) sqlplus -s $dbUser/$dbPass@$dbName <<EOF>> spool.log
set feedback off
set trims on
set trim on
set feedback off
set linesize 4000
set pagesize 0
whenever sqlerror exit 173;
spool ${extract_file_prefix}.dat
For me, this is working. What exactly are you getting, and what exactly are you expecting?
(t352104@svlipari[GEN]:/lem) $ cat test.ksh
#!/bin/ksh
TEMP_FILE_PATH=`pwd`
extract_file_prefix=emp
dbUser=t352104
dbPass=t352104
dbName=gen_dev
dataFile=${TEMP_FILE_PATH}/${extract_file_prefix}.dat
sqlplus -s $dbUser/$dbPass@$dbName <<EOF > $dataFile
set trims on
set trim on
set tab off
set linesize 7000
SET HEAD off AUTOTRACE OFF FEEDBACK off VERIFY off ECHO off SERVEROUTPUT off term off;
whenever sqlerror exit 173;
SELECT *
FROM emp ;
exit
EOF
(t352104@svlipari[GEN]:/lem) $ ./test.ksh
(t352104@svlipari[GEN]:/lem) $ echo $?
0
(t352104@svlipari[GEN]:/lem) $ cat emp.dat
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 09-DEC-82 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 12-JAN-83 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10
(t352104@svlipari[GEN]:/lem) $ -
How to read the content of "Transfer-Encoding: chunked" header
Can anybody tell me how to get or read the value of the transfer encoding?
I got the HTTP response header "Transfer-Encoding: chunked", but I can't get the chunk size or the chunked data.
Without those details I can't read the content of the site. If Content-Length is in the HTTP header, I can read up to that length. But in this Transfer-Encoding case, I can't find out anything except the value "chunked". Please suggest how to read the content of the site when Transfer-Encoding is used.
Message was edited by:
VeeraLakshmi
I used HttpURLConnection also. If I use that, I am getting the values in the request headers only and not in the response headers, so I can't read the content.
Then I went through RFC 2616. There I could only understand chunked transfer encoding in general; I still have no idea how to get the chunk-size and the chunked data, because all I see in the HTTP response header is "Transfer-Encoding: chunked", with no size and data below it. If I knew the size or the data, I could proceed by converting the hex into bytes and reading that much. -
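The chunk framing described in RFC 2616 can be decoded by hand: read a hex size line, read exactly that many bytes, consume the trailing CRLF, and stop at a size of 0. A minimal, self-contained sketch (class and method names are mine; note that HttpURLConnection normally de-chunks for you, so this is only needed when reading the raw socket):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class ChunkedDecoder {

    // Read one CRLF-terminated line, byte by byte.
    static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1) {
            if (b == '\r') { in.read(); break; } // consume the '\n'
            sb.append((char) b);
        }
        return sb.toString();
    }

    // De-chunk a "Transfer-Encoding: chunked" body: each chunk is a hex
    // size line, the data, and a trailing CRLF; a size of 0 ends the body.
    static byte[] decode(InputStream in) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        while (true) {
            String sizeLine = readLine(in);
            int semi = sizeLine.indexOf(';');   // drop any chunk extension
            if (semi >= 0) sizeLine = sizeLine.substring(0, semi);
            int size = Integer.parseInt(sizeLine.trim(), 16);
            if (size == 0) break;
            byte[] chunk = new byte[size];
            int read = 0;
            while (read < size) {               // read() may return short counts
                int n = in.read(chunk, read, size - read);
                if (n == -1) throw new IOException("truncated chunk");
                read += n;
            }
            out.write(chunk);
            readLine(in);                       // CRLF after the chunk data
        }
        return out.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String wire = "5\r\nHello\r\n7\r\n, world\r\n0\r\n\r\n";
        InputStream in = new ByteArrayInputStream(wire.getBytes(StandardCharsets.ISO_8859_1));
        System.out.println(new String(decode(in), StandardCharsets.ISO_8859_1)); // prints "Hello, world"
    }
}
```

If the body is also gzip-compressed, wrap the de-chunked bytes (or a de-chunking InputStream) in a GZIPInputStream rather than the raw socket stream.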
Apache Proxy Plug-in with WebLogic 8.1 SP5 - Transfer Encoding:Chunked
Hello All,
Configuration: Apache 2.0.48
Plugin - mod_wl128_20.so
WebLogic Version - 8.1 SP5
There is no SSL between Apache and WebLogic server.
Apache seems to have an issue when the response from WebLogic has: Hdrs from WLS:[Transfer-Encoding]=[chunked]
I turned on debugging on the Apache side with the DebugAll flag, and WLS sends data to the Apache plug-in.
Is this a known issue? Is there a CR to fix it? Please let me know if you need further details.
Any help is appreciated.
Thanks,
Janani
Hi Vishwas,
Thank you for the reply. I forgot to mention that Apache and WebLogic are on Solaris 9 platform.
Accessing a webapp hosted on WebLogic through Apache->plug-in->WebLogic returns a 500 internal server error, but other webapps hosted on the same WebLogic domain work properly. Looking at the response headers from WebLogic shows that WLS returns transfer-encoding=chunked. The other webapps, which work properly, have content-length set and the transfer-encoding is not chunked.
So, the question is does Apache Plug-in for weblogic 8.1 SP5 read the chunked data properly?
Thanks,
Janani -
Decompressing BytesMessage in JMS
Hi,
I am currently facing problems during decompression of bytesmessage data.
What i am trying to do:
Compress data and publish it onto Tibco JMS channel as bytesMessgae using weblogic server (Tibco JMS is configured as foreign jms server in weblogic).
Decompress the bytesMessage data on the subscriber side.
The same set of compression and decompression components works fine as individual components in a plain Java program,
i.e. compressing some in-memory data and then uncompressing it works fine.
But when I publish at one end and then subscribe and try to uncompress, I get the errors below.
Guys, can you please share your expert thoughts on this? This is very critical for my project, which is right now in the integration phase.
Error:
DataFormatException: java.util.zip.DataFormatException: unknown compression method
I am using
java.util.zip.Inflater;
java.util.zip.Deflater;
for my compression/decompression logic.
Thanks,
Kiran Kumar
Hi Kiran,
The problem is likely with the app, the JVM, or Tibco, since WL isn't directly in the message flow. You might want to check to see if Tibco is somehow changing the contents of your message, or if your app is incorrectly serializing/deserializing the compressed data into the message. Of course, you can also try posting to Tibco newsgroups.
Tom, BEA
P.S. This probably doesn't help you much, but as an FYI, WebLogic 9.0 JMS provides a built-in automatic message compress/decompress feature.
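To narrow down where the bytes are being altered, a round trip with Deflater/Inflater outside JMS is worth comparing against what arrives in the BytesMessage. "unknown compression method" means the first bytes the Inflater sees are not a zlib header (typically 0x78), so dumping and diffing the byte arrays on both sides usually pinpoints the corruption. A minimal sketch (names are illustrative, not from the poster's code):

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateRoundTrip {

    // Compress with java.util.zip.Deflater, as the publisher would before
    // writing the bytes into a BytesMessage.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            out.write(buf, 0, deflater.deflate(buf));
        }
        deflater.end();
        return out.toByteArray();
    }

    // Decompress with java.util.zip.Inflater, as the subscriber would after
    // reading the bytes back out of the BytesMessage.
    static byte[] decompress(byte[] input) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(input);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            out.write(buf, 0, inflater.inflate(buf));
        }
        inflater.end();
        return out.toByteArray();
    }

    public static void main(String[] args) throws DataFormatException {
        byte[] original = "payload published as a BytesMessage".getBytes();
        byte[] restored = decompress(compress(original));
        System.out.println(java.util.Arrays.equals(original, restored)); // prints "true"
    }
}
```

If the subscriber's first received byte is not the same as the publisher's first compressed byte, the message body is being transformed in transit (e.g. written as a TextMessage or re-encoded as characters) rather than carried as raw bytes.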