Criticism of new data "optimization" techniques

On February 3, Verizon announced two new network practices in an attempt to reduce bandwidth usage:
1. Throttling data speeds for the top 5% of new users, and
2. Employing "optimization" techniques on certain file types for all users, in certain parts of the 3G network.
These were two separate changes, and this post only talks about (2), the "optimization" techniques.
I would like to criticize the optimization techniques as harmful to Internet users and contrary to long-standing principles of how the Internet operates. This optimization can lead to web sites appearing to contain incorrect data, web sites appearing to be out of date, and, depending on how optimization is implemented, privacy and security issues. I'll explain below.
I hope Verizon will consider reversing this decision, or if not, making some changes to reduce the scope and breadth of the optimization.
First, I'd like to thank Verizon for posting an in-depth technical description of how optimization works, available here:
http://support.vzw.com/terms/network_optimization.html
This transparency helps increase confidence that Verizon is trying to make the best decisions for their users. However, I believe they have erred in those decisions.
Optimization Contrary to Internet Operating Principles
The Internet has long been built around the idea that two distant servers exchange data with each other by transmitting "packets" using the IP protocol. The headers of these packets carry the information that the Internet routers between the two servers need in order to deliver them. One of the Internet's operating principles is that when two servers set up an IP connection, the routers connecting them do not modify the data. They may route the data differently, modify the headers in some cases (as with network address translation), or possibly, in some cases, even block the data--but not modify it.
What these new optimization techniques do is intercept a device's connection to a distant server, inspect the data, determine that the device is downloading a file, and in some cases--to attempt to reduce bandwidth used--modify the packets so that the file the device receives contains different (smaller) contents than what the web server sent.
I believe that modifying the contents of a file in this manner should be off-limits to any Internet service provider, regardless of whether it is trying to save bandwidth or achieve other goals. An Internet service provider should be a common carrier, billing for service and bandwidth used but not interfering in any way with the content served by a web server, the size or contents of the files transferred, or customers' choices--made through the sites they visit--about how much data they are willing to use and pay for.
Old or Incorrect Data
Verizon's description of the optimization techniques explains that many common file types, including web pages, text files, images, and video files, will be cached. This means that when a device visits a web page, it may be loading Verizon's cached copy, so the user may be viewing an older version of the site than what the web server is currently serving. Additionally, if the files making up a single web site were cached at different times--say, a page and the CSS files or images it references--the mismatched versions may even cause pages to render incorrectly.
It is true that many users already experience caching, because many devices and nearly all computer browsers have a personal cache. However, the user is in control of the browser cache: the user can click "reload" in the browser to bypass it, clear the cache at any time, or change the caching options. There is no indication that Verizon's optimization gives the user any control over caching, or even any way to know whether a particular web page is cached.
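
To make that control concrete: when a user clicks "reload," the browser sends standard HTTP request headers asking any well-behaved cache along the path to revalidate with the origin server. Below is a minimal sketch of such a request (the URL is a placeholder); a transparent cache that ignored these headers would leave the user no equivalent recourse.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class ForcedReload {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://example.com/page.html"); // placeholder URL
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            // Headers a browser sends on a forced reload; they instruct any
            // standards-compliant intermediate cache to revalidate with the origin.
            conn.setRequestProperty("Cache-Control", "no-cache");
            conn.setRequestProperty("Pragma", "no-cache"); // for older HTTP/1.0 caches
            System.out.println("HTTP status: " + conn.getResponseCode());
        }
    }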
Potential Security and Privacy Violations
The nature of the security or privacy violations that might occur depends on how carefully Verizon has implemented optimization. But as an example of the risk, look at what happened with Google Web Accelerator. Google Web Accelerator was a now-discontinued browser add-on that routed web requests through centralized caches on Google's servers to speed them up. However, some users found that on web sites where they logged in, they were served personalized pages that actually belonged to different users, containing those users' private data. This happened because Google's caching technology was initially unable to distinguish between public and private pages, so pages cached for one user were served to others. Problems like this can be prevented with very careful engineering, but caching adds a significant risk that these types of privacy problems will occur.
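
As a toy illustration of that flaw (not how Google's system actually worked), consider a cache keyed only by URL, with no notion of per-user state: whatever personalized page is cached first is what every later user receives.

    import java.util.HashMap;
    import java.util.Map;

    public class NaiveCache {
        // Cache keyed only by URL -- the flaw described above.
        private final Map<String, String> byUrl = new HashMap<String, String>();

        String fetch(String url, String sessionId) {
            String cached = byUrl.get(url);
            if (cached != null) {
                return cached; // a later user gets the first user's page
            }
            String body = "private page for " + sessionId; // stand-in for an origin fetch
            byUrl.put(url, body);
            return body;
        }

        public static void main(String[] args) {
            NaiveCache cache = new NaiveCache();
            System.out.println(cache.fetch("http://site/account", "alice"));
            System.out.println(cache.fetch("http://site/account", "bob")); // Alice's data!
        }
    }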
However, Verizon's explanation of how video caching works suggests that these problems with mixed-up files will indeed occur. Verizon says that its caching technology works by examining "the first few frames (8 KB) of the video". This means that if multiple videos are identical at the start, the cache will treat them as the same video, even if they differ later in the file.
Although it may not happen very frequently, this means that if two videos are encoded identically except for edits later in the file, some users may be shown a completely different version of the video than what the web server transmitted. This could be true even if the differing videos are stored on completely separate servers, since Verizon's explanation states that the cataloguing process treats videos as identical based on the 8 KB analysis even when they come from different URLs.
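
A toy sketch of why prefix-based identification collides (an illustration of the general technique, not Verizon's actual implementation): hash only the first 8 KB, and two files that agree on that prefix become indistinguishable.

    import java.security.MessageDigest;
    import java.util.Arrays;

    public class PrefixKey {
        // Hypothetical cache key: a hash over only the first 8 KB of the file.
        static byte[] cacheKey(byte[] video) throws Exception {
            byte[] prefix = Arrays.copyOf(video, Math.min(video.length, 8 * 1024));
            return MessageDigest.getInstance("SHA-256").digest(prefix);
        }

        public static void main(String[] args) throws Exception {
            byte[] original = new byte[1_000_000];  // stand-in for a video file
            byte[] edited = original.clone();
            edited[500_000] = 42;                   // an edit far past the first 8 KB
            // Same key for different files: a cache keyed this way would
            // serve one video in place of the other.
            System.out.println(Arrays.equals(cacheKey(original), cacheKey(edited))); // true
        }
    }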
Questions about Tethering and Different Devices
Verizon's explanation says near the beginning that "The form and extent of optimization [...] does not depend on [...] the user's device". However, elsewhere in the document, the explanation states that transcoding may be done differently depending on the capabilities of the user's device. Perhaps a clarification in this document is needed.
This is an important issue because many people will want to know whether optimization happens when tethering on a laptop. I think some people would view optimization very differently depending on whether it is applied on a phone or on a laptop. For example, many people may have a strong requirement, for business reasons, that a file they download from a server is exactly the file they think they downloaded, and not one that has been optimized by Verizon.
What I would Like Verizon To Do
With respect to Verizon's need to limit bandwidth usage or provide incentives for users to limit their bandwidth usage, I hope Verizon reverses the decision to deploy optimization and chooses alternate, less intrusive means to achieve their bandwidth goals.
However, if Verizon still decides to proceed with optimization, I hope they will consider:
Allowing individual customers to disable optimization completely. (Some users may choose to keep it enabled, for faster Internet browsing on their devices, so this is a compromise that will achieve some bandwidth savings.)
Only optimizing or caching video files, instead of more frequent file types such as web pages, text files, and image files.
Disabling optimization when tethering or using a Wi-Fi personal hotspot.
Finally, I hope Verizon publishes more information about any changes it makes to optimization to address these and other concerns, and commits to customers and potential customers about its future plans. Many customers are in 1- or 2-year contracts, or are considering entering such contracts, and do not wish to be hit by sudden changes that negatively impact them.
Verizon, if you are reading, thank you for considering these concerns.

A very well written and thought out article. And you're absolutely right - this "optimization" is exactly the reason Verizon is fighting the new net neutrality rules. Of course, Verizon itself (and its most ardent supporters on the forums) will fail to see the irony of requiring users to obtain an "unlimited" data plan, then complaining about data usage and trying to limit it artificially. It's like a hotel renting you a room for a week, then complaining that you stayed 7 days.
Of course, it was all part of the plan to begin with - people weren't buying the data plans (because they were such a poor value), so the decision was made to start requiring them. To make it more palatable, they called the plans "unlimited" (even though at one point unlimited meant limited to 5GB, a cap that was later dropped). Then, once the idea of mandatory data settles in, implement data caps with overages, which is what they were shooting for all along. AT&T has already leapt; Verizon has said it will, too.

Similar Messages

  • Data uploading techniques//LSMW

    Dear Experts,
    I am very new to SAP HCM. Can anyone explain data uploading techniques from an HCM point of view? And what are BDC and PDC?
    Thanks in Advance
    Ram

    http://wiki.scn.sap.com/wiki/display/ABAP/Batch+Input+-+BDC
    SAP ECC - Plant Data Collection - Time, Attendance and Employee Expenditures (HR-PDC)

  • Data Migration techniques

    Hi Experts,
    I want to know about data migration techniques and how we can best use MDM while migrating an old version of R/3 to a new version of R/3.
    I have implemented SAP MDM in cases where we had a number of SAP R/3 instances across different regions, and we were taking data from each R/3 one by one and doing data standardization, consolidation, harmonization, and so on... I am not talking about all this...
    There was a good explanation from Markus Ganser about duplicate data and identical data... I know all this... but my question is: when I have only one SAP R/3 and I still want to implement an MDM solution while migrating my old R/3 instance to the new one, how can I proceed in this scenario? What is the data migration technique?
    I know the common answer will be to use MDM as middleware: take master data from the old instance and, after consolidation, send it back to the new instance, and at the same time send transactional data directly to the new version... But is this worth doing? Is there any other approach?
    If there is any document on this, or anyone has an idea about data migration techniques while implementing an MDM solution, please send documents to [email protected].......
    In short, I am looking for the below 3 points while doing migration along with SAP MDM:
    Data migration techniques
    Prerequisites
    Methodology in this kind of scenario
    Step by step procedure
    cheers,
    R.n

    Hi,
    here is a link to a complete Data Migration Life Cycle white paper:
    http://www.redwoodsystems.co.uk/dataMWhitePaper.html#links
    Hope it might be of some use to you.
    Thank you & reward points if useful.
    Message was edited by:
            Dasari Narendra

  • DSO New data table rejects data

    Dear SDNers,
    I have a critical issue.
    I am loading data into a DSO (which has an end routine with a lookup on a DSO of 2 crore (20 million) records in production).
    In development, it worked fine.
    In production, it took a very long time to load. I initially thought the lookup DSO might be the cause, but the load never reached the new data table; the data load monitor always stays yellow.
    I used a filter in the DTP and loaded only limited data, but this also does not load; the monitor stays yellow again.
    Then I found that I am unable to open the new data table's data browser (from Manage).
    I also checked with SE11: I am able to see the table, but if I click Contents, the system hangs.
    So what I observe is that the new data table is not allowing any new entries to be posted.
    Kindly give me some insight regarding solving this issue.
    Thanks,
    Guru

    Hi Prasanth,
    I didn't say I am able to see data in the change log; I said I am able to access the data browser of both the active and change log tables.
    And @Saveen, I do not want to load data manually into the new data table.
    To be precise, I will answer Prasanth's questions here:
    What is the status of the load in the monitor screen? Yellow (still running).
    Is the load completed or not? What is the record count? No, the load is not completing.
    Is the request active in the DSO and available for reporting? The request is active.
    Are you facing this issue for the first time? Yes, for the first time, and only in this DSO.
    Are you sure you have authorizations to check the data through the Manage screen? (Try running an authorization trace on your ID and check whether the roles are there.) Yes: I am able to see the active data table, change log, etc., and even the new table of other DSOs. Only the new table of this DSO is inaccessible from Manage.
    See, I used a filter in the DTP, so it will bring only limited records from the lookup DSO. So there is no performance deadlock.
    And again, I simulated the load and saw that the result package gets filled up perfectly, so the code also works fine.
    But if I load the data, it stays in yellow status with no records added to the new table.
    In this scenario, the new table is inaccessible from Manage, so I am pretty sure the two issues are interrelated.
    Can someone help me please?
    Thanks,
    Guru

  • Announcing 3 new Data Loader resources

    There are three new Data Loader resources available to customers and partners.
    •     Command Line Basics for Oracle Data Loader On Demand (for Windows) - This two-page guide (PDF) shows command line functions specific to Data Loader.
    •     Writing a Properties File to Import Accounts - This 6-minute Webinar shows you how to write a properties file to import accounts using the Data Loader client. You'll also learn how to use the properties file to store parameters, and to use the command line to reference the properties file, thereby creating a reusable library of files to import or overwrite numerous record types.
    •     Writing a Batch File to Schedule a Contact Import - This 7-minute Webinar shows you how to write a batch file to schedule a contact import using the Data Loader client. You'll also learn how to reference the properties file.
    You can find these on the Data Import Resources page, on the Training and Support Center.
    •     Click the Learn More tab> Popular Resources> What's New> Data Import Resources
    or
    •     Simply search for "data import resources".
    You can also find the Data Import Resources page on My Oracle Support (ID 1085694.1).

    Unfortunately, I don't believe that approach will work.
    We use a similar mechanism for some loads (using the bulk loader instead of web services) for the objects that have a large quantity of daily records.
    There is a technique (though messy) that works fine. Since Oracle does not allow the "queueing up" of objects of the same type (you have to wait for "account" to finish before you load the next "account" file), you can monitor the .LOG file for the SBL 0363 error, which means you cannot submit another file yet (typically because one is already being processed).
    By monitoring for this error code in the log, you can sleep your process, then try again after a preset amount of time.
    We use this to allow an UPDATE, followed by an INSERT on the account... and then a similar technique so "dependent" objects have to wait for the prime object to finish processing.
    PS: normal Windows .BAT scripts aren't sophisticated enough to handle this. I would recommend either Windows PowerShell or C/Korn/Bourne shell scripts in Unix.
    I hope that helps some.
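
    A rough sketch of that sleep-and-retry loop in Java (the original poster used shell scripts; the log path, marker handling, and timing here are all hypothetical):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        public class LoaderGate {
            public static void main(String[] args) throws Exception {
                Path log = Paths.get("dataloader.log"); // hypothetical log location
                long retryMillis = 60_000;              // wait a minute between checks

                // Keep sleeping while the log still shows the "can't submit yet" error.
                while (Files.exists(log)
                        && new String(Files.readAllBytes(log)).contains("SBL 0363")) {
                    System.out.println("Previous load still in progress; sleeping...");
                    Thread.sleep(retryMillis);
                }
                System.out.println("Clear to submit the next file.");
                // ...launch the next loader run here...
            }
        }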

  • Architectural Design - New Data Warehouse

    Hello All,
    This is my first post to the Oracle discussion forums and I'm looking forward to the interactions with other OWB users.
    I am just beginning to implement a design for a new data warehouse. Our team has already defined user requirements for a subset of the business (Sales/Marketing) and has committed a logical model to paper. We have installed our dev environment and are now ready to begin creating our prototype.
    I've read all the Oracle documentation I can get my hands on regarding implementing DW objects and have been pondering the approach: ROLAP or MOLAP.
    It seems to make sense to deploy into a ROLAP environment, bringing in all our data from the staging area to create a stable relational data store, and then select the most-used or most-queried dimensions and facts to deploy in a MOLAP environment. Has anyone used this approach? Any lessons learned? Do you have to choose one method or the other, or can you take a blended approach? Would you deploy both in the same database instance or separate the two?
    thx

    I'm somewhat new to OWB coming from an Informatica background but in our environment, we are doing the same thing. Our Enterprise Data Warehouse will be based on ROLAP and I intend to use MOLAP for subsets of the EDW.
    Dimensions in Oracle are somewhat interesting in that they are "leveled" and you can tie cubes or "fact tables" to any level of the dimension. This is a bit un-Kimball-like and has taken some getting used to. I think it is a powerful feature but I will have to experiment some until I understand it better.
    One critical bug I've run into with 10.2 involves dimension roles - the time dimension, for instance. Typically this is one table that is aliased many, many times. If you exceed roughly 5 roles for the time dimension, generation of the object fails, since OWB generates a single anonymous PL/SQL block that exceeds 64k. It's a documented bug in development with no workaround, according to Metalink.
    Other gotchas are that table changes always try to generate "create table" scripts, even if you only add an index or change parallelism. We have had to do table maintenance outside OWB and keep the metadata in sync up until now.
    I haven't done any of the MOLAP yet, but from what I read there are some restrictions - for example, you can't have roles on dimensions in MOLAP, and I believe you can't have SCDs in MOLAP. I don't know how time dimensions are handled in MOLAP without roles! Do people really generate tables for every single time dimension in OWB?
    Hope you share your experiences here!
    - Mike Taylor

  • What are the Optimization Techniques?

    What are the optimization techniques? Can anyone send a sample program that uses good optimization techniques?
    Phani

    Hi Phani Kumar Durusoju,
    ABAP/4 programs can take a very long time to execute and can make other processes wait before executing. Here are some tips to speed up your programs and reduce the load they put on the system:
    Use the GET RUN TIME command to help evaluate performance. It's hard to know whether an optimization technique REALLY helps unless you test it out; using this tool can help you learn what is effective, under what kinds of conditions. GET RUN TIME has problems under multiple CPUs, so you should use it to test small pieces of your program rather than the whole program.
    Generally, try to reduce I/O first, then memory, then CPU activity. I/O operations that read/write to hard disk are always the most expensive operations. Memory, if not controlled, may have to be written to swap space on the hard disk, which in turn increases your I/O reads/writes to disk. CPU activity can be reduced by careful program design, and by using commands such as SUM (SQL) and COLLECT (ABAP/4).
    Avoid SELECT *, especially on tables that have a lot of fields. Use SELECT A B C INTO instead, so that fields are only read if they are used. This can make a very big difference.
    Field-groups can be useful for multi-level sorting and displaying. However, they write their data to the system's paging space, rather than to memory (internal tables use memory). For this reason, field-groups are only appropriate for processing large lists (e.g. over 50,000 records). If you have large lists, you should work with the systems administrator to decide the maximum amount of RAM your program should use, and from that, calculate how much space your lists will use. Then you can decide whether to write the data to memory or swap space. See the Fieldgroups ABAP example.
    Use as many table keys as possible in the WHERE part of your SELECT statements.
    Whenever possible, design the program to access a relatively constant number of records (for instance, if you only access the transactions for one month, there will probably be a reasonable range, like 1200-1800, for the number of transactions entered within that month). Then use a SELECT A B C INTO TABLE ITAB statement.
    Get a good idea of how many records you will be accessing. Log into your productive system and use SE80 -> Dictionary Objects (press Edit), enter the table name you want to see, and press Display. Go to Utilities -> Table Contents to query the table contents and see the number of records. This is extremely useful in optimizing a program's memory allocation.
    Try to design the user interface so that the program gradually unfolds information to the user, rather than presenting a huge list all at once.
    Declare your internal tables using OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to be accessing. If the number of records exceeds NUM_RECS, the data will be kept in swap space (not memory).
    Use SELECT A B C INTO TABLE ITAB whenever possible. This will read all of the records into the itab in one operation, rather than the repeated operations that result from a SELECT A B C INTO ITAB ... ENDSELECT statement. Make sure that ITAB is declared with OCCURS NUM_RECS, where NUM_RECS is the number of records you expect to access.
    If the number of records you are reading is constantly growing, you may be able to break it into chunks of relatively constant size. For instance, if you have to read all records from 1991 to the present, you can break it into quarters and read the records one quarter at a time. This will reduce I/O operations. Test extensively with GET RUN TIME when using this method.
    Know how to use the COLLECT command. It can be very efficient.
    Use the SELECT SINGLE command whenever possible.
    Many tables contain totals fields (such as monthly expense totals). Use these to avoid wasting resources by calculating a total that has already been calculated and stored.
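
    The same column-selection idea outside ABAP, as a hedged JDBC sketch (the in-memory H2 database, table, and column names are made up for illustration): name only the fields you use and collect them in one pass, rather than SELECT * processed row by row.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.util.ArrayList;
        import java.util.List;

        public class SelectColumns {
            public static void main(String[] args) throws Exception {
                // In-memory database so the sketch is self-contained (requires the H2 jar).
                try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:demo")) {
                    conn.createStatement().execute(
                        "CREATE TABLE product(id INT, name VARCHAR(50), price DECIMAL(10,2), shop_id INT)");
                    // Equivalent in spirit to SELECT A B C INTO TABLE ITAB:
                    String sql = "SELECT id, name, price FROM product WHERE shop_id = ?";
                    try (PreparedStatement ps = conn.prepareStatement(sql)) {
                        ps.setInt(1, 42);
                        try (ResultSet rs = ps.executeQuery()) {
                            List<String> rows = new ArrayList<>();
                            while (rs.next()) {
                                rows.add(rs.getInt("id") + " " + rs.getString("name")
                                        + " " + rs.getBigDecimal("price"));
                            }
                            System.out.println(rows.size() + " rows fetched");
                        }
                    }
                }
            }
        }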
    These are good websites which will help you:
    Performance tuning
    http://www.sapbrainsonline.com/ARTICLES/TECHNICAL/optimization/optimization.html
    http://www.geocities.com/SiliconValley/Grid/4858/sap/ABAPCode/Optimize.htm
    http://www.abapmaster.com/cgi-bin/SAP-ABAP-performance-tuning.cgi
    http://abapcode.blogspot.com/2007/05/abap-performance-factor.html
    cheers!
    gyanaraj
    ****Please reward points if you find this helpful

  • Data trasfer techniques

    Hi,
    We have so many data transfer techniques, such as BDC, LSMW, BAPI, and IDoc.
    What is each technology for? I mean, what is the significance of each technology?

    Hi Sandeep,
    BDC is the old technique used for data transfer. SAP replaced BDCs with BAPI function modules in the newer versions, and SAP doesn't encourage using BDC. For almost all transactions we have readily available BAPIs. Both do the same thing,
    i.e. updating the database. BDC and BAPI can both be used from within SAP or from a non-SAP system to SAP.
    E.g.: if we are going to change some order information, it happens within SAP. We get the order details from the SAP database and change them using BDC or BAPI.
    If we are going to create a new order by uploading the data from flat files, then it is between a non-SAP system and SAP.
    Another important point is that all BAPIs are RFCs, so you can call a BAPI from another system as well (SAP or non-SAP).
    LSMW is generally used for the initial migration of legacy data from a non-SAP system to an SAP system.
    IDoc is used to send and receive documents/data from SAP to SAP, non-SAP to SAP, or SAP to non-SAP. I have heard that it is possible to send data between two non-SAP systems using IDocs as well.
    Thanks,
    Vinod.

  • ERR:10003 Unexpected data store file exists for new data store

    Our TimesTen application crashes and then it cannot connect to the TimesTen data store; when we use ttIsql we get the error "10003: Unexpected data store file exists for new data store", so we must rebuild the data store.
    I guess the application damages the data store because we use "direct-linked" mode. Is that true?
    Should I use "client-server" mode if our data is very important?
    thx!

    Your question raises several important discussion points:
    It is possible (though very unlikely in practice) for a C or C++ program operating in direct mode to damage the contents of the datastore e.g. by writing through an invalid memory pointer. In the 11+ years that TimesTen has existed as a commercial product we have so far never seen any support case where this was diagnosed as the cause of a problem. However, it is definitely a theoretical possibility and rigorous program testing and use of tools such as Purify is strongly recommended when developing in C or C++ in direct mode. Java programs running in direct mode are completely 'safe' unless they invoke non-Java code via JNI when a similar risk is present.
    The reality is that most customers who use TimesTen in very high performance mission critical applications use mainly direct mode...
    Note also that an application crash should not cause any damage or corruption to a datastore, even in direct mode, as TimesTen contains explicit mechanisms to guard against this.
    Your specific problem (error 10003) has nothing to do with the datastore being damaged. This error reflects a discrepancy between the instance main daemon's metadata about the datastores it manages and reality. It occurs when the main daemon does not know about a datastore, and yet when it comes to connect to (and hence create) the datastore, it finds that checkpoint or log files already exist. The main daemon's metadata is managed solely by the main daemon and is completely separate from the datastore and datastore files (the default location is <tt_instance_install_directory>/info, though you can change this at install time). The usual cause is that someone has been manually manipulating files within that directory (which of course you should never do) and has removed or renamed the .DBI file corresponding to the datastore.
    This error should never arise under normal circumstances and certainly not just because some application has crashed.
    Rather than simply switching to the (much slower) client/server mode I think we should try and understand why this error is occurring. Could you please post the following:
    1. Output of ttVersion command
    and then we can take it from there.
    Thanks, Chris

  • Adding New Data To Same Page - HELP

    I am trying to put together an invoice on the fly. Products are added to the invoice by selecting the desired product from a drop-down menu and hitting the Add button. You SHOULD then be able to select more products from the same drop-down menu, and hitting Add again should include them in the invoice... simple? Well, the problem I have is that when I hit the Add button to add another item, it just replaces the one I already added... this is very annoying... I cannot think of a way to carry the existing product data through the form submission, ready for adding to.
    Any ideas? This is really bugging me, and the more time I spend on it, the worse it gets. Code below.
    <%@ page buffer="32kb" %>
    <%@ page import="java.sql.*, javax.servlet.ServletException, java.io.IOException, com.stock_control.*" %>
    <%!
    String convertResultsToSelect ( ResultSet rs, String selectName, String idCol, String descCol ) throws SQLException {
        StringBuffer sb = new StringBuffer ( "<select name=" + selectName + ">" );
        if (rs != null) {
            while (rs.next()) {
                sb.append ( "<option value=" );
                sb.append ( rs.getString(idCol) );
                sb.append ( ">" );
                sb.append ( rs.getString(descCol) );
                sb.append ( "</option>" );
            }
        }
        // close the select once, after the loop, instead of inside it
        sb.append ( "</select>" );
        return sb.toString ();
    }
    %>
    <html>
    <head>
    <title>Members Area - Stock Check</title>
    </head>
    <body bgcolor="#CCCCCC" topmargin="0">
    <%@ include file="topConn.jsp" %>
    <%
              String msg = "";
              String name = "";
              String address1 = "";
              String address2 = "";
              String town ="";
              String postcode = "";
              String country = "";
              String phone = "";
              String mail = "";
              String comments = "";
              float total = 0;
              int nextFree = 0;
              String prodResults;
              int numberOfItems = 0;
         String shop = (String)session.getValue("SHOP");
         String selectProd = "SELECT ProductID, ProductName FROM product WHERE ShopID=";
         rs1 = stmt.executeQuery(selectProd+shop);
         prodResults = convertResultsToSelect( rs1, "product","ProductID","ProductName");
         rs = stmt.executeQuery("SELECT COUNT(*) AS stockLevel FROM product WHERE ShopID ="+shop);     
    while(rs.next())
         numberOfItems = rs.getInt("stockLevel");
         Product[] products = new Product[numberOfItems];
    if (request.getParameter("add")!=null)
              // get values from all text boxes....
              name = request.getParameter("name");
              address1 = request.getParameter("address1");
              address2 = request.getParameter("address2");
              town = request.getParameter("town");
              postcode = request.getParameter("postcode");
              country = request.getParameter("country");
              phone = request.getParameter("phone");
              mail = request.getParameter("mail");
              comments = request.getParameter("comments");
                   // add data from dropdown
              String newProduct = request.getParameter("product");
              //connect to database and put product data into array + increment
                   int thenewid = 0;
                   String theName = "";
                   float thePrice = 0;
                   String nono = "0";
              String getProd = "SELECT * FROM product WHERE ProductID=";
              rs = stmt.executeQuery(getProd+newProduct);
              if(rs.next()) {
                   thenewid = Integer.parseInt(newProduct);
                   theName = rs.getString("ProductName");
                   thePrice = rs.getFloat("SalePrice");
                   Product addProduct = new Product(thenewid, theName, thePrice, nono);
                   products[nextFree] = addProduct; //PROBLEM
                   nextFree++;
              }
         }
    // reload page with new data in form
    //for loop arround array     
    %>
    <form method="GET">
    <div align=center>
    <p>
    <%=prodResults%>
    <input type="submit" value="Add" name="add"></p>
    <TABLE width=100% height="1">
    <TBODY>
    <tr>
    <td width="4%" height="19"> </td>
    <td width="96%" height="19"> 
    </td>
    </tr>
    <tr>
    <td width="4%" height="198"> </td>
    <td width="96%" height="198">
    <div align="center"></div>
    <table border="0" cellpadding="2" style="border-collapse: collapse" bordercolor="#111111" width="100%" id="AutoNumber5">
    <tr>
    <td width="4%"> </td>
    <td width="15%"><font face="Verdana" size="2">Customer Name</font></td>
    <td width="30%"><input type="text" name="name" size="30" value="<%=name%>"></td>
    <td width="19%"><font face="Verdana" size="2">Customer Phone #</font></td>
    <td width="26%"><input name="phone" type="text" size="30" value="<%=phone%>"></td>
    <td width="6%"> </td>
    </tr>
    <tr>
    <td> </td>
    <td><font face="Verdana" size="2">Address Line 1</font></td>
    <td><input type="text" name="address1" size="40" value="<%=address1%>"></td>
    <td><font face="Verdana" size="2">Customer E-mail</font></td>
    <td><input name="mail" type="text" size="30" value="<%=mail%>"></td>
    <td> </td>
    </tr>
    <tr>
    <td height="32"> </td>
    <td><font face="Verdana" size="2">Address Line 2</font></td>
    <td> </td>
    <td><font face="Verdana" size="2">Comments</font></td>
    <td width="26%" rowspan="5"><p>
    <textarea name="comments" cols="29" rows="5"><%=comments%></textarea>
    </p>
    <p>  </p></td>
    <td> </td>
    </tr>
    <tr>
    <td> </td>
    <td><font size="2" face="Verdana">Town/City</font></td>
    <td><input name="town" type="text" size="40" value="<%=town%>">
    <input name="address2" type="text" size="40" value="<%=address2%>"></td>
    <td> </td>
    <td> </td>
    </tr>
    <tr>
    <td> </td>
    <td><font size="2" face="Verdana">Post Code</font></td>
    <td><input type="text" name="postcode" size="10" value="<%=postcode%>"></td>
    <td> </td>
    <td> </td>
    </tr>
    <tr>
    <td> </td>
    <td><font size="2" face="Verdana">Country</font></td>
    <td><input type="text" name="country" size="40" value="<%=country%>"></td>
    <td> </td>
    <td> </td>
    </tr>
    <tr>
    <td height="24"> </td>
    <td> </td>
    <td> </td>
    <td> </td>
    <td> </td>
    </tr>
    <tr>
    <td height="23"> </td>
    <td><strong><font size="2" face="Verdana">Product Code</font></strong></td>
    <td><strong><font size="2" face="Verdana">Name</font></strong></td>
    <td><strong><font size="2" face="Verdana">Price</font></strong></td>
    <td> </td>
    <td> </td>
    </tr>
    <%
    if (request.getParameter("add")!=null)
    for(int i=0; i<nextFree; i++)
    Product temp = products;
    int prodID = temp.getid();
    String prodName = temp.getname();
    float prodPrice = temp.getprice();
    total = total + prodPrice;
    // work out total on the fly
    %>
    <tr>
    <td height="23"> </td>
    <td> <font size="2" face="Verdana, Arial, Helvetica, sans-serif"><%=prodID%> </font></td>
    <td><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><%=prodName%></font></td>
    <td><font size="2" face="Verdana, Arial, Helvetica, sans-serif">&pound;<%=prodPrice%></font></td>
    <td><font size="2" face="Verdana, Arial, Helvetica, sans-serif"> </font></td>
    <td> </td>
    </tr>
    <%
        }
    }
    %>
    <tr>
    <td width="4%" height="23"> </td>
    <td width="15%"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"> </font></td>
    <td width="30%"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"> </font></td>
    <td width="19%"><strong><font size="2" face="Verdana, Arial, Helvetica, sans-serif">Total:</font></strong></td>
    <td width="26%"><font size="2" face="Verdana, Arial, Helvetica, sans-serif"><%=total%></font></td>
    <td width="6%"> </td>
    </tr>
    </table>
    <p align="center">
    <input type="submit" value="Continue" name="continue">
              </form>
    </p>
    </td> </tr> </TBODY></table>
    <%@ include file="bottomConn.jsp" %>
    </body>
    </html>

    anyone?
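
    For what it's worth, the symptom described above usually means the products array is recreated on every request, so each submit starts from scratch. One common fix, sketched here as a hedged suggestion rather than the thread's accepted answer, is to keep the accumulating list in the HTTP session (names are made up; assumes java.util.* is imported in the page):

        // Inside the scriptlet, in place of the per-request array:
        List cart = (List) session.getAttribute("cart");
        if (cart == null) {
            cart = new ArrayList();
            session.setAttribute("cart", cart); // survives across form submissions
        }
        if (request.getParameter("add") != null) {
            // ...look up the selected product exactly as before...
            cart.add(addProduct); // appends instead of overwriting
        }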

  • 'Get All New Data Request by Request' option not working Between DSO n Cube

    Hi BI's..
    Could anyone please tell me why the options 'Get One Request Only' and 'Get All New Data Request by Request' are not working in a DTP between a standard DSO and an InfoCube.
    Scenario:
    I loaded the data year by year, say FY 2000 to FY 2009, via InfoPackage into a write-optimized DSO (10 requests), then loaded it request by request into a standard DSO and activated each request. I selected the option 'Get All New Data Request by Request' in the DTPs, and it works fine between the write-optimized DSO and the standard DSO, but not between the standard DSO and the cube. When I execute that DTP, it takes the data as a single request from the standard DSO to the cube (10 requests become a single request).
    Regards,
    Sari.

    Hi,
    What does your DTP extraction setting look like among the options below? It should be Change Log, assuming you are not deleting change log data.
    Delta Init. Extraction from...
    - Active Table (with archive)
    - Active Table (without archive)
    - Archive (full extraction only)
    - Change Log
    Also, if you want to enable deltas, please do not delete the change log. That could create issues with further updates from the DSO.
    Hope that helps.
    Regards
    Mr Kapadia
    *Assigning points is the way to say thanks*

  • Data in New Data in ODS

    Hello,
    I am trying to load data from R/3 into the ODS.
    When I check in the monitor, it says
    07:37:12 ( 549741 from 549741 Records )
    but it's not turning green; it's still yellow.
    I checked the new data table in the ODS. The number of entries in it is 0.

    Hi
    You can view the data in the ODS only after the status is green.
    Check whether it is showing any error in the description.
    If not, then wait for some time; we can't do anything while it is showing yellow (in process) status.
    Cheers
    SM

  • Refresh jTable after inserting new data into the Database

    Hey all,
    I'm using Netbeans 6.5 to create a Desktop Application which is connected to a Java DB (Derby).
    The first simple steps were all very successfull:
    Create the jTable and bind it to the database => everything works fine. When the application starts, it correctly shows all data from the database.
    The problem starts when I try to insert new data into the database.
    For that reason I've created text fields and a "Save" button. When I press the button, it successfully inserts the data into the database, but it is not displayed in the jTable (when the application restarts, it is all there; the table just isn't updated at runtime). I've tried table.invalidate() and table.repaint(), but they just don't work.
    Any help will be GREATLY appreciated. But please bear in mind that most of the code is Netbeans-generated and most of it is not editable.
    Many thanks in advance.
    George

    Once again you are right, my friend. I jumped to conclusions way too fast, when I shouldn't have. (Give me a break, I've been busting my head over this for well over a week.) The response I saw when I did that was that indeed a line is added to the jTable. Because I falsely set the index of the object to be added to be second to last, the row appeared on the table; what I didn't see at the time was that the last one disappeared. Hmm...
    A new adventure begins...
    So after a few hours of messing around with it here are my observations:
    1) It was not an observable list. When I add the new element with employeesList.add(newEmp);, the table gets notified but I get a bunch of exceptions:
    Exception in thread "AWT-EventQueue-0" java.lang.IndexOutOfBoundsException: Index: 84, Size: 84
            at java.util.ArrayList.RangeCheck(ArrayList.java:546)
            at java.util.ArrayList.get(ArrayList.java:321)
            at org.jdesktop.swingbinding.impl.ListBindingManager$ColumnDescriptionManager.validateBinding(ListBindingManager.java:191)
            at org.jdesktop.swingbinding.impl.ListBindingManager.valueAt(ListBindingManager.java:99)
            at org.jdesktop.swingbinding.JTableBinding$BindingTableModel.getValueAt(JTableBinding.java:713)
            at javax.swing.JTable.getValueAt(JTable.java:1903)
            at javax.swing.JTable.prepareRenderer(JTable.java:3911)
            at javax.swing.plaf.basic.BasicTableUI.paintCell(BasicTableUI.java:2072)
            at javax.swing.plaf.basic.BasicTableUI.paintCells(BasicTableUI.java:1974)
            at javax.swing.plaf.basic.BasicTableUI.paint(BasicTableUI.java:1897)
            at javax.swing.plaf.ComponentUI.update(ComponentUI.java:154)
            at javax.swing.JComponent.paintComponent(JComponent.java:743)
            at javax.swing.JComponent.paint(JComponent.java:1006)
            at javax.swing.JViewport.blitDoubleBuffered(JViewport.java:1602)
            at javax.swing.JViewport.windowBlitPaint(JViewport.java:1568)
            at javax.swing.JViewport.setViewPosition(JViewport.java:1098)
            at javax.swing.plaf.basic.BasicScrollPaneUI$Handler.vsbStateChanged(BasicScrollPaneUI.java:818)
            at javax.swing.plaf.basic.BasicScrollPaneUI$Handler.stateChanged(BasicScrollPaneUI.java:807)
            at javax.swing.DefaultBoundedRangeModel.fireStateChanged(DefaultBoundedRangeModel.java:348)
            at javax.swing.DefaultBoundedRangeModel.setRangeProperties(DefaultBoundedRangeModel.java:285)
            at javax.swing.DefaultBoundedRangeModel.setValue(DefaultBoundedRangeModel.java:151)
            at javax.swing.JScrollBar.setValue(JScrollBar.java:441)
            at javax.swing.plaf.basic.BasicScrollBarUI.scrollByUnits(BasicScrollBarUI.java:907)
            at javax.swing.plaf.basic.BasicScrollPaneUI$Handler.mouseWheelMoved(BasicScrollPaneUI.java:778)
            at javax.swing.plaf.basic.BasicScrollPaneUI$MouseWheelHandler.mouseWheelMoved(BasicScrollPaneUI.java:449)
            at apple.laf.CUIAquaScrollPane$XYMouseWheelHandler.mouseWheelMoved(CUIAquaScrollPane.java:38)
            at java.awt.Component.processMouseWheelEvent(Component.java:5690)
            at java.awt.Component.processEvent(Component.java:5374)
            at java.awt.Container.processEvent(Container.java:2010)
            at java.awt.Component.dispatchEventImpl(Component.java:4068)
            at java.awt.Container.dispatchEventImpl(Container.java:2068)
            at java.awt.Component.dispatchMouseWheelToAncestor(Component.java:4211)
            at java.awt.Component.dispatchEventImpl(Component.java:3955)
            at java.awt.Container.dispatchEventImpl(Container.java:2068)
            at java.awt.Component.dispatchEvent(Component.java:3903)
            at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4256)
            at java.awt.LightweightDispatcher.processMouseEvent(Container.java:3965)
            at java.awt.LightweightDispatcher.dispatchEvent(Container.java:3866)
            at java.awt.Container.dispatchEventImpl(Container.java:2054)
            at java.awt.Window.dispatchEventImpl(Window.java:1801)
            at java.awt.Component.dispatchEvent(Component.java:3903)
            at java.awt.EventQueue.dispatchEvent(EventQueue.java:463)
            at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:269)
            at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:190)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:184)
            at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:176)
            at java.awt.EventDispatchThread.run(EventDispatchThread.java:110)
    Exception in thread "AWT-EventQueue-0" java.lang.IndexOutOfBoundsException: Index: 84, Size: 84
            at java.util.ArrayList.RangeCheck(ArrayList.java:546)
            at java.util.ArrayList.get(ArrayList.java:321)
            at org.jdesktop.swingbinding.impl.ListBindingManager$ColumnDescriptionManager.validateBinding(ListBindingManager.java:191)
            at org.jdesktop.swingbinding.impl.ListBindingManager.valueAt(ListBindingManager.java:99)
            at org.jdesktop.swingbinding.JTableBinding$BindingTableModel.getValueAt(JTableBinding.java:713)
            at javax.swing.JTable.getValueAt(JTable.java:1903)
            at javax.swing.JTable.prepareRenderer(JTable.java:3911)
            at javax.swing.plaf.basic.BasicTableUI.paintCell(BasicTableUI.java:2072)
    ... and a lot more - which, from my poor understanding, means that the jTable successfully notices the change but is not able (??) to adjust to it. What is more interesting is that when I plainly add the element to the end of the list (without an index, that is), a blank row appears at the end of my table. The weird thing is that I've bound the table to some text fields below it, and when I select that empty row, all the data appears correctly in the text fields.
    I tried going through:
                    org.jdesktop.observablecollections.ObservableCollections.observableList(employeesList).add(newEmp);
    as well as
                    help = org.jdesktop.observablecollections.ObservableCollections.observableListHelper(employeesList);
                    help.getObservableList().add(newEmp);
                    help.fireElementChanged(employeesList.lastIndexOf(newEmp));
    and
                    obsemployeesList = org.jdesktop.observablecollections.ObservableCollections.observableList(employeesList);
                    obsemployeesList.add(newEmp);
    and I still get the same results (both the exceptions and the mysterious empty row at the end of the table).
    So, I'm again in terrible need of your advice. I can't thank you enough for the effort you put into this.
    Best regards,
    George
    Edited by: tougeo on May 30, 2009 11:06 AM
    Edited by: tougeo on May 30, 2009 11:21 AM
    Edited by: tougeo on May 30, 2009 11:30 AM
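
    For reference, the usual Beans Binding pattern is to create the observable wrapper once, bind the table to that wrapper, and perform all later mutations through it; wrapping an already-bound plain list after the fact produces a new wrapper the existing binding never hears about. A minimal sketch (binding setup elided; requires the beansbinding jar):

        import java.util.ArrayList;
        import java.util.List;
        import org.jdesktop.observablecollections.ObservableCollections;

        public class BindingSketch {
            public static void main(String[] args) {
                // Wrap the backing list ONCE, before binding, and keep this reference.
                List<String> employees =
                        ObservableCollections.observableList(new ArrayList<String>());
                // ...bind 'employees' (not the inner ArrayList) to the JTable here...

                // Later mutations go through the observable wrapper, which fires
                // the events the JTableBinding listens for.
                employees.add("New Employee");
            }
        }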

  • Why in SE16 we can not  see New Data Table for standard DSO

    Hi,
    We say that a standard DSO has three tables (New Data Table, Active Data Table, and Change Log Table). Why, then, can we not see the New Data Table of a standard DSO in SE16?
    Regards,
    Sushant

    Hi Sushant,
    It is possible to see the data of all 3 DSO tables through SE16. Maybe you do not have authorization to see data through SE16.
    Sankar Kumar

  • New Data in R/3 Enterprise - ABAP Proxies - XI what happen?

    Hi,
    I have a theoretical question:
    If I use R/3 Enterprise on WAS, put new data into the R/3 system, and transport it to XI with ABAP proxies, what happens in the systems?
    How does the R/3 system put new data into the proxy runtime and send it to XI?
    I want to understand how new data in an R/3 system gets to the XI Integration Server via ABAP proxies.

    Hi Marcel,
    >>>>> How does the R/3 system put new data in the proxy runtime and send it to XI?
    All you need to do in R/3 is fill the tables of a generated structure
    and execute one method of a generated class (send....).
    Then R/3 will connect over HTTP to XI and send the data from your structures.
    The structures and class in R/3 are generated automatically via transaction SPROXY.
    Regards,
    michal
    <a href="/people/michal.krawczyk2/blog/2005/06/28/xipi-faq-frequently-asked-questions"><b>XI / PI FAQ - Frequently Asked Questions</b></a>
