HTTP Data Compression

Is there any HTTP data compression support in WLS 6.1 or 7.0?
There are tools for IIS and the Apache server. This helps network
performance and download time.
www.ehyperspace.com
http://www.innermedia.com/Products/SqueezePlay_IIS_Real-Time_Web_/squeezeplay_iis_real-time_web_.htm
thanks
/selvan

There are no generic solutions for Weblogic 5.1.
We support filter-like functionality for Weblogic 5.1 with our EnGarde
software, but we only provide it through OEM contracts (no direct sales).
Sorry.
You can use a "front component" to route all requests to other servlets/JSPs
yourself, but if you do substitution with a "front component", you'll have
to extend the WL classes themselves (request, response), which gets tricky.
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Selvan Ramasamy" <[email protected]> wrote in message
news:[email protected]..
Yes, I totally forgot about the filters ... Thank you.
What would be your suggestion for the WebLogic 5.1 server? Most of my
customers are using WebLogic 5.1.
thanks
"Cameron Purdy" <[email protected]> wrote in message
news:[email protected]..
Cameron, how can I do this so that I don't have to change all of my JSPs
and servlets? Should I plug in a custom ServletResponse to do this?
In 6.1 (maybe) or 7.0 you can use a filter, which is like a Servlet that
substitutes its own Request and/or Response object.
Peace,
Cameron Purdy
Tangosol, Inc.
http://www.tangosol.com/coherence.jsp
Tangosol Coherence: Clustered Replicated Cache for Weblogic
"Selvan Ramasamy" <[email protected]> wrote in message
news:[email protected]..
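A minimal sketch of the filter approach described above, assuming a container with
the Servlet 2.3 Filter API (7.0, and possibly 6.1, per the thread) and a client that
sends Accept-Encoding: gzip. The class names GZIPFilter and GZIPResponseWrapper are
illustrative, not WebLogic or standard API classes:

    import java.io.IOException;
    import java.io.OutputStreamWriter;
    import java.io.PrintWriter;
    import java.util.zip.GZIPOutputStream;
    import javax.servlet.*;
    import javax.servlet.http.*;

    // Compresses the response body with gzip when the client accepts it.
    public class GZIPFilter implements Filter {
        public void init(FilterConfig config) {}
        public void destroy() {}

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            HttpServletResponse response = (HttpServletResponse) res;
            String accept = request.getHeader("Accept-Encoding");
            if (accept != null && accept.indexOf("gzip") != -1) {
                // Substitute our own response object, so downstream servlets/JSPs
                // write into a gzip stream without knowing about it.
                GZIPResponseWrapper wrapped = new GZIPResponseWrapper(response);
                chain.doFilter(request, wrapped);
                wrapped.finish();
            } else {
                chain.doFilter(request, response);
            }
        }
    }

    class GZIPResponseWrapper extends HttpServletResponseWrapper {
        private GZIPOutputStream gzip;
        private ServletOutputStream stream;
        private PrintWriter writer;

        GZIPResponseWrapper(HttpServletResponse response) {
            super(response);
        }

        public ServletOutputStream getOutputStream() throws IOException {
            if (stream == null) {
                ((HttpServletResponse) getResponse()).setHeader("Content-Encoding", "gzip");
                gzip = new GZIPOutputStream(getResponse().getOutputStream());
                stream = new ServletOutputStream() {
                    public void write(int b) throws IOException { gzip.write(b); }
                };
            }
            return stream;
        }

        public PrintWriter getWriter() throws IOException {
            if (writer == null) {
                writer = new PrintWriter(new OutputStreamWriter(getOutputStream(),
                        getResponse().getCharacterEncoding()));
            }
            return writer;
        }

        void finish() throws IOException {
            if (writer != null) writer.flush();
            if (gzip != null) gzip.finish();
        }
    }

The filter still has to be declared and mapped in web.xml; on WebLogic 5.1
(Servlet 2.2) the Filter API is not available, which is why a front component or a
vendor-specific hook is needed there instead.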

Similar Messages

  • Using Data Compression on Microsoft SQL 2008 R2

    We have a very large database which keeps growing and growing. This has made our upgrade process extremely troublesome because the upgrade wizard seems to require close to 3 times the database size of free space to even start.
    As such, we are considering activating DATA COMPRESSION on the PAGE level in Microsoft SQL Server 2008 R2. This technology is native to the SQL Server and compresses the rows and the pages so that they do not take up more space than necessary.
    Traditionally, each row takes up the space of the maximum size of all its fields, even when a field is only partially filled with data.
    [Blog about Data Compression|http://blogs.msdn.com/b/sqlserverstorageengine/archive/2007/11/12/types-of-data-compression-in-sql-server-2008.aspx]
    Our idea is to use this initially on the Axxx tables (historic data) to minimize the space they take by using for example:
    ALTER TABLE [dbo].[ADO1] REBUILD PARTITION = ALL
    WITH (DATA_COMPRESSION = PAGE)
    On a test database we have seen tables go from 6GB of space to around 1.5GB, which is a significant saving.
    MY QUESTION: Is this allowed from an SAP point of view? The technology is completely transparent, but it does involve a rebuild of the table, as demonstrated above.
    Thanks.
    Best regards,
    Mike

    We are using Simple recovery model, so our log files are pretty small.
    Our database itself is about 140GB now and it keeps growing.
    We've also reduced the history size to about 10 versions.
    Still, some of our tables are 6-10GB.
    One of the advantages of data compression is that it also improves disk I/O at the cost of slightly higher CPU usage, which we are pretty sure our server can handle.
    Mike

  • Adobe Air needs HTTP gzip compression

    Hello
    We are developing an Adobe AIR application. We use SOAP for
    service calls and we depend entirely upon gzip HTTP compression to
    make the network performance even vaguely acceptable. SOAP is such
    a fat format that it really needs gzip compression to get the
    responses down to a reasonable size to pass over the Internet.
    Adobe Air does not currently support HTTP gzip compression
    and I would like to request that this feature be added ASAP. We
    can't release our application until it can get reasonable network
    performance through HTTP gzip compression.
    Thanks
    Andrew

    Hi blahxxxx,
    Sorry for the slow reply -- I wanted to take some time to try
    this out rather than give an incomplete response.
    It's not built into AIR, but if you're using
    Flex/ActionScript for your application you can use a gzip library
    to decompress a gzipped SOAP response (or any other gzipped
    response from a server -- it doesn't have to be SOAP). Danny
    Patterson gives an example of how to do that here:
    http://blog.dannypatterson.com/?p=133
    I've been prototyping a way to make a subclass of the Flex
    WebService class that has this built in, so if I can get that
    working it would be as easy as using the Flex WebService component.
    I did some tests of this technique, just to see for myself if
    the bandwidth savings is worth the additional processing overhead
    of decompressing the gzip data. (The good news is that the
    decompression part is built into AIR -- just not the specific gzip
    format -- so the most processor-intensive part of the gzip
    decompression happens in native code.)
    Here is what I found:
    I tested this using the
    http://validator.nu/ HTML validator
    web service to validate the HTML source of
    http://www.google.com/. This
    isn't a SOAP web service, but it does deliver an XML response
    that's fairly large, so it's similar to SOAP.
    The size of the payload (the actual HTTP response body) was
    5321 bytes compressed, 45487 bytes uncompressed. I ran ten trials
    of each variant. All of this was done in my home, where I have a
    max 6Mbit DSL connection. In the uncompressed case I measured the
    time starting immediately after sending the HTTP request and ending
    as soon as the response was received. In the compressed case I
    started the time immediately after sending the HTTP request and
    ended it after receiving the response, decompressing it and
    assigning the compressed content to a ByteArray (so the compressed
    case times include decompression, not just download). The average
    times for ten trials were:
    Uncompressed (text) response: 1878.6 ms
    Compressed (gzip) response: 983.1 ms
    Obviously these will vary a lot depending on the payload
    size, the structure of the compressed data, the speed of the
    network, the speed of the computer, etc. But in this particular
    case there's obviously a benefit to using gzipped data.
    I'll probably write up the test I ran, including the source,
    and post it on my blog. I'll post another reply here once I've done
    that.
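    For comparison, the same round trip in plain Java (not AIR/ActionScript) only needs
    the standard library; the URL below is a placeholder, and this assumes the server
    actually honours Accept-Encoding: gzip:

    import java.io.BufferedReader;
    import java.io.InputStream;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.zip.GZIPInputStream;

    // Request a gzip-encoded response and inflate it while reading.
    public class GzipHttpClient {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/service");   // placeholder URL
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestProperty("Accept-Encoding", "gzip");

            InputStream in = conn.getInputStream();
            if ("gzip".equalsIgnoreCase(conn.getContentEncoding())) {
                in = new GZIPInputStream(in);   // transparently decompress the body
            }
            BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            StringBuffer body = new StringBuffer();
            for (String line; (line = reader.readLine()) != null; ) {
                body.append(line).append('\n');
            }
            reader.close();
            System.out.println("Received " + body.length() + " characters");
        }
    }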

  • Is data compression all automatic? Or does manual steps occur in HANA Studio

    Hi all,
    I'm new to HANA and have been leveraging SCN in a BIG way to answer my questions (thanks everyone for your contributions). This is my first post, as I was unable to find an answer to my current question.
    I've been reading up on data compression in HANA and I learned that there are different techniques, such as Dictionary, Cluster, and Run-length encoding.
    My question is: Is the compression automatic, or does it need to be modeled within HANA Studio? Let's use Dictionary Encoding as an example. Are there algorithms in place within HANA that will automatically aggregate values so only distinct values remain? Will the attribute vector and inverted index tables be created automatically?
    Just as some background, this is what I am basing my question on:
    http://www.agilityworks.co.uk/our-blog/demystifying-the-column-store-%E2%80%93-viewing-column-store-statistics-in-sap-ha…
    Thank you!
    Kyle

    Hi Justin,
    you are right, the compression is related to the delta merge - and therefore, as long as delta merges happen automatically, compression will also happen automatically.
    SAP HANA has two compression stages on the column store: the first - and typically dominating one - is the implicit compression obtained from using a data dictionary for all column-store columns.
    The second stage is often called "sparse compression" - it offers additional compression on top of the dictionary compression, with several available algorithms. During the "optimize compression" run, the most appropriate compression algorithm is chosen for each column (and some columns may not be sparse-compressed at all, because it would not bring a benefit).
    The optimize compression run does not happen with every delta merge. Instead, it will be performed with the first delta merge on a given table, and then only if the data content of the table has changed significantly (typically, if I remember correctly, the system waits until the number of records has doubled). Note that the optimize compression run may _change_ the compression type chosen for a given column. The sparse compression itself will be done with every delta merge, using the algorithms selected in the last optimize compression run.
    If you have switched off automatic delta merging globally, or if you have disabled automatic delta merging for a given table, there will also be no automatic compression (in the entire system or on the given table). BW on HANA uses the smart merge feature, so that in a standard BW on HANA, automatic compression will happen (with the timing of the delta merge being decided in a cooperation between BW and HANA).
    Best,
    Richard

  • Data compression can have negative impact on application ?

    Hi,
    They are going to analyse the table/index structure and go ahead with data compression in order to speed up performance.
    We have been asked to verify whether data compression will affect the application or not.
    For example, we run one process through the application which rebuilds the index of a big table, so that may be affected.
    Could you please help me with which areas I should focus on and investigate before asking them to proceed with data compression?
    -Vaibhav Chaudhari

    This article will give you most of the answers:
    http://technet.microsoft.com/en-us/library/dd894051(v=sql.100).aspx

  • File Adapter Data Compression

    I'd like to extend the file adapter behavior to add data compression features, like unzipping after reading a file and zipping before writing a file. I read Oracle's file adapter documentation, but I didn't find any extension point.

    If it's a Java mapping, just create a DT with any structure you wish,
    e.g.
    DT_Dummy
    |__ Dummy_field
    A Java mapping does not validate the XML against the DT you created.
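    Not an official extension point, but if a custom Java step around the adapter (for
    example a Java mapping or a wrapper module) is an option, the zipping and unzipping
    themselves are small with java.util.zip. A rough sketch with illustrative names only:

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.util.zip.ZipEntry;
    import java.util.zip.ZipInputStream;
    import java.util.zip.ZipOutputStream;

    // Illustrative helpers: zip a payload before it is written,
    // and unzip the first entry of an archive after it is read.
    public class ZipHelper {

        public static byte[] zip(String entryName, byte[] payload) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ZipOutputStream zos = new ZipOutputStream(bos);
            zos.putNextEntry(new ZipEntry(entryName));
            zos.write(payload);
            zos.closeEntry();
            zos.close();
            return bos.toByteArray();
        }

        public static byte[] unzip(byte[] archive) throws Exception {
            ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(archive));
            zis.getNextEntry();                       // read the first (and only) entry
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = zis.read(buf)) > 0; ) {
                bos.write(buf, 0, n);
            }
            zis.close();
            return bos.toByteArray();
        }
    }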

  • Multiple XML HTTP Data sources sequentially

    Hi,
    We are creating a report using multiple XML HTTP data sources sequentially.
    The report uses multiple XML HTTP data sources sequentially; the report creation fails or is delayed. We would like to join the data sources after accessing them from the HTTP server.
    At the same time, when the same XML files are accessed locally as data sources, the report is created without problems.
    Is there any alternate solution for XML HTTP data source access, or how should I proceed with this?
    Thanks,
    Unni

    I am not able to use datasets here.
    The context is as below.
    I am running an HTTP server (an XML-RPC program), which will generate XML output as defined by a local XML schema.
    Here I have multiple HTTP requests, each of which will generate an XML output.
    I am using an XML and Web Services connection where I provide values like below
    (please note that file1 and file2 are normal text files with data corresponding to the XSDs):
    HTTP(s) XML URL : http://localhost:8002/ReadTable?name=file1
    Local Schema File : E:\Test\file1.xsd
    HTTP(s) XML URL : http://localhost:8002/ReadTable?name=file2
    Local Schema File : E:\Test\file2.xsd
    In this way I am able to create multiple XML outputs through HTTP requests (only file1 works), which means that while creating reports the sequential requests are not handled properly.
    My report will join, say, two of these files. But at the time of report creation it asks only for the runtime parameter of the first request in the join.
    Here the report is generated only with data from the first text (data) file (file1).
    Hope this gives some clarity about the problem.

  • How to find data compression and speed

    1. What's the command/way for viewing how much space the data takes up in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it's not given in terms of pure CPU cycles, which would have been more absolute.
    I always thought that there must be a better way of checking the speed of execution, like checking a log which gives all data regarding executions, rather than just looking at the query executions in the output window.

    Rajarshi Muhuri wrote:
    1. What's the command/way for viewing how much space the data takes up in HANA tables as opposed to the same data on disk? I mean, how do people measure that there has been a 10:1 data compression?
    The data is stored the same way in memory as it is on disk. In fact, scans, joins etc. are performed on compressed data.
    To calculate the compression factor, we check the required storage after compression and compare it to what would be required to save the same amount of data uncompressed (you know, length of data x number of occurrences for each distinct value of a column).
    One thing to note here is: compression factors must always be seen for one column at a time. There is no such measure as a "table compression factor".
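    As a back-of-the-envelope illustration of that calculation (plain arithmetic, not a
    HANA API; all names are hypothetical):

    import java.util.Map;

    // Estimated uncompressed size = sum over distinct values of
    // (value length x number of occurrences); the factor is that size divided
    // by the storage the compressed column actually needs.
    public class CompressionFactor {
        public static double factor(Map<String, Long> distinctValueCounts,
                                     long compressedBytes) {
            long uncompressedBytes = 0;
            for (Map.Entry<String, Long> e : distinctValueCounts.entrySet()) {
                uncompressedBytes += (long) e.getKey().length() * e.getValue().longValue();
            }
            return (double) uncompressedBytes / compressedBytes;
        }
    }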
    > 2. The time taken for execution, as seen from executing the same SQL on HANA, varies (I see that when I am F8-ing the same query repeatedly), so it's not given in terms of pure CPU cycles, which would have been more absolute.
    >
    > I always thought that there must be a better way of checking the speed of execution, like checking a log which gives all data regarding executions, rather than just looking at the query executions in the output window.
    Well, CPU cycles wouldn't be an absolute measure either.
    Think about the time that is not spent on the CPU.
    Wait time for locks, for example.
    Or time lost because other processes used the CPU.
    In reality you're usually not so much interested in the perfect execution of one query that has all the resources of the system bound to it; instead you strive to get the best performance when the system has its typical workload.
    In the end, the actual response time is what means money to business processes.
    So that's what we're looking at.
    And there are some tools available for that. The performance trace for example.
    And yes, query runtimes will always differ and never be totally stable all the time.
    That is why performance benchmarks take averages for multiple runs.
    regards,
    Lars

  • Data compression in xi

    Hi ,
    How do you do data compression in XI?
    thanks in advance,
    Ramya Shenoy

    Hi Ramya,
    Are you talking about archiving of the messages in the XI server, or about compressing an individual XI message, as Parteek has explained in his reply using the PayloadZipBean?
    Thanks
    Ajay

  • DB2 : data compression active ?

    Hi forum,
    how can I find out whether data compression is active in our DB2 database with SAP BW?
    Kind regards
    S0007336574

    Hello Olaf,
    to check whether you are entitled for compression license-wise, use the following:
    db2licm -l
    Check for the storage optimization feature.
    To check, whether there are any tables that have the compression flag enabled
    db2 "select count(*) from syscat.tables where compression = 'R' or compression = 'B'"
    To get the names of these tables
    db2 "select tabname, compression from syscat.tables where compression = 'R' or compression = 'B'"
    "R" means row compression, "B" means both Row and Value compression.
    To check the actual compression savings for tables, you can use this:
    --> Be aware that it may run for a longer time
    --> Replace the SAPHJZ by your own SAPSID
    SELECT SUBSTR(TABNAME, 1, 25) AS TABLE_NAME,
           DICT_BUILDER,
           COMPRESS_DICT_SIZE,
           EXPAND_DICT_SIZE,
           PAGES_SAVED_PERCENT,
           BYTES_SAVED_PERCENT
      FROM SYSIBMADM.ADMINTABCOMPRESSINFO
     WHERE TABSCHEMA = 'SAPHJZ'
       AND COMPRESS_DICT_SIZE > 0
     ORDER BY 1 ASC
    Hope this helps.
    Hans-Jürgen

  • Data Compression express VI chops off last waveform element

    While using the Data Compression express VI, I noticed the VI truncates the array of waveforms by removing the last element.
    When I log this to an Excel file, it puts a blank row into it. I ran this VI twice, which appended two runs in the same log file to show it.
    What is an easy way to delete this last element out of the signal array?
    See the graph's last sample on the x axis of the VI I attached. The element at x=1.0 has been deleted on the graph, thus resulting in a blank row in my Excel file.

    Hi Id,
    I think this happens because it is using the data that comes after the point to average that point, though I am uncertain. I will have to look into it further.

  • Error in FI data compression

    Hi Gurus,
    I have made the summarization in OBCY as per note 36353, uploaded the program (ZTTYPV) as per the correction instructions, and made the entry with VBRK table -``/field - ``.
    I created a billing document, but I am getting this error when I try to release the document through transaction VFX3.
    The detail of the error at VFX3
    9056500667 000000 Error in FI data compression
         LongText
             Detail
              Diagnosis
                  The data in the FI document should be summarized via field
                  ''. However, the system could not find this field in the
                  internal structure (see 'LFACIGEN').
              System Response
                  This error stems from inconsistencies between the data base
                  tables of the FI document and the internal summarization
                  structure.
              Procedure
                  Start program 'SAPFACCG', in order to reset the internal
                  structure, and then check whether field '' is in
                   'LFACIGEN'.
    I have tried to execute the program SAPFACCG,
    but it does not run; it says it needs to be restarted. How do I restart it?
    REPORT SAPFACCG MESSAGE-ID F5.
    Generating compression structure P_ACC in LFACIGEN.
    This program has to be restarted in case of problem with compression
    (Message F5 843)
      CALL FUNCTION 'FI_DOCUMENT_INIT'.
    Is there any way to resolve this issue?
    Thanks in advance
    Umed

    Hi
    I have uploaded the program (ZTTYPV) as per the correction instructions, which has made the entry in TTYPV with object type VBRK, table -``/field - ``; the same thing is now reflected in OBCY as VBRK table -``/field - ``.
    umed

  • How do I do data compression when message is sent over RMI-IIOP

    Hi,
    Is there a way one can do data compression/decompression when a message is sent over RMI-IIOP?
    Regards,
    Sourav
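    One common workaround, assuming you control both ends of the RMI-IIOP call, is to
    gzip the serialized payload yourself and pass it as a byte[] parameter, then reverse
    the steps on the receiving side. A rough sketch (the class name is illustrative):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.io.Serializable;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    // Gzip a serializable payload into a byte[] for the remote call,
    // and inflate it back into an object on the other side.
    public class CompressedPayload {

        public static byte[] compress(Serializable obj) throws Exception {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream oos = new ObjectOutputStream(new GZIPOutputStream(bos));
            oos.writeObject(obj);
            oos.close();            // closing also finishes the gzip stream
            return bos.toByteArray();
        }

        public static Object decompress(byte[] data) throws Exception {
            ObjectInputStream ois = new ObjectInputStream(
                    new GZIPInputStream(new ByteArrayInputStream(data)));
            Object obj = ois.readObject();
            ois.close();
            return obj;
        }
    }

    This changes the remote interface to take byte[] instead of a typed object, so it
    is a trade-off rather than something transparent to the ORB.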

    To elaborate on Tammy's idea, you could use for instance C:\Users\Public at the place where you initially put your Excel file to make sure this is found on the target computer. I would consider this a workaround though.
    Or for Lumira documents that you already designed, change the location of the Excel file and use Data/Edit from your computer, then save the Lumira documents before sending them to the target audience. 
    From my humble opinion, the product should allow to use Data/Edit and change the source file even if the initial file path is no longer found. This should be possible for your target audience.
    Antoine

  • Re: Data Compression

    -----Original Message-----
    From: Jose Suriol <[email protected]>
    To: 'Forte mail list' <[email protected]>
    Date: Friday, February 27, 1998 1:00 PM
    Subject: Data Compression
    >
    Thanks to all who replied to my post about Forte compressing
    data before sending them to the network. It appears Forte tries to
    minimize the size of certain data types but does not do compression
    across the board. As I understand Forte Version 4 will probably
    support the Secure Sockets Layer (SSL) which has a data compression
    option, but unfortunately SSL is, first and foremost, a secure protocol,
    and while compression is optional, encryption (a CPU-intensive process)
    is not.
    Encryption, integrity and compression are all optional in SSL.
    It's possible to request a connection that only has compression, assuming that
    the other side agrees.
    Derek

  • Message Data Compression

    I'm not exactly new to java, I've just been away from it for a few years.
    I'm trying to create an XML-based messaging system for communication between a suite of applications. XML being what it is (verbose text) I want to apply data compression. (In the short term, during development, the messages will be between components on a single machine. Ultimately, the applications will likely run on many machines and use the internet to communicate with each other.)
    I was looking at the java.util.zip tools, but I'm not actually creating files. I thought I could use just the ZipEntry part, but it's not coming together well. I was also thinking some flavor of SOAP might serve my needs, but SOAP has come onto the scene during my absence from development activities. I've got to familiarize myself with it a bit more before I can assess whether or not it fits my needs.
    I'm open to suggestions as to how I should approach this. All ideas anyone cares to share are greatly appreciated.
    - Patrick

    The system will probably use a combination of RMI and JMS, but that's not anything I want to bring into the question at hand.
    The only problem I'm concerned about right now is, "How do I compress a packet of data?" What I do with that packet of compressed data is a problem for a different level of abstraction. I've got a fairly large buffer of XML that I want to compress before passing off to another entity to act upon. What's the best way to do that?
    - Patrick
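    For an in-memory buffer there is no need for ZipEntry or files at all; the
    Deflater/Inflater stream classes in java.util.zip work directly on byte arrays.
    A rough sketch (MessageCodec is just an illustrative name):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.util.zip.DeflaterOutputStream;
    import java.util.zip.InflaterInputStream;

    // Compress and decompress an XML buffer entirely in memory.
    public class MessageCodec {

        public static byte[] compress(byte[] xml) throws IOException {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            DeflaterOutputStream def = new DeflaterOutputStream(bos);
            def.write(xml);
            def.finish();
            return bos.toByteArray();
        }

        public static byte[] decompress(byte[] compressed) throws IOException {
            InflaterInputStream inf =
                    new InflaterInputStream(new ByteArrayInputStream(compressed));
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            for (int n; (n = inf.read(buf)) > 0; ) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }

    The resulting byte[] can then travel over RMI, JMS, or a socket; the receiving
    side only needs the matching decompress step.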
