Recording Sizes

I have looked everywhere for help with my questions, but
everything I have read suggests that the way I recorded my
project is correct. I recorded my project at a desktop
resolution of 1024x768 with a recording size of 800x600, which
looks great at my 1024x768 desktop resolution. The users of this
application are mostly on computers with desktop resolutions of
800x600, so I thought I had my bases covered by setting the
recording size to 800x600, but I was sadly mistaken: those users
have to do a lot of scrolling to see the entire screen. So here is
my first question: what are the best settings to use when recording
for users whose desktop resolution is 800x600?
Next, I tried resizing the project and found that a
580x435 recording size fits my entire movie on screen for those
with an 800x600 desktop resolution. But some users of this
application have 1024x768 displays, and to them the project now looks
far too small. So here's my second question: is there a way to
record the project so that it looks good to both groups of users?
Another problem I have come across is that if I try
to record the project at 580x435, the application I am trying
to capture (which was built for 800x600) will not fit within the
recording size.
Please help. I have created this entire project, which is set
to go live tomorrow, and I thought it was good to go until this.
Thanks so much for any guidance you can provide.

This appears to be a repeat of a post that appeared
yesterday. Please direct any replies to the thread where the question
first appeared. You may do this by clicking this link.

Similar Messages

  • External Table - possible bug related to record size and total bytes in file

    I have an External Table defined with a fixed record size, using Oracle 10.2.0.2.0 on HP/UX. At 279-byte records (1 or more fields, doesn't seem to matter), it can read almost 5M bytes in the file (17,421 records, to be exact). At 280-byte records it cannot; it blows up with "partial record at end of file" - which is nonsense. It can read up to 3,744 records, just below 1,048,320 bytes (1M bytes). One record over that, and it blows up.
    Now, if I add READSIZE and set it to 1.5M, it works. I found this extends further; for instance, a 280 recsize with READSIZE 1.5M will work for a while but blows up at 39M bytes in the file (I didn't bother figuring out exactly where it stops working in this case). Increasing READSIZE to 5M works again, up to 78M bytes in the file. But change the definition to 560-byte records and it blows up. Decrease the file size to 39M bytes and it still won't work with 560-byte records.
    Does anyone have an explanation for this behavior? The docs say READSIZE is the read buffer, but only mention that it matters for the largest record that can be processed - mine are only 280/560 bytes. My table definition is practically taken right out of the example in the docs for fixed-length records (change the fields, sizes, and names and it is identical - all clauses the same).
    We are going to be using these external tables a lot and need them to be reliable, so increasing READSIZE to the largest value I can doesn't make me comfortable, since I can't be sure how large an input file may become in production.
    Should I report this as a bug to Oracle, or am I missing something?
    Thanks,
    Bob

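    For reference, here is a minimal sketch of a fixed-record external table with an explicit READSIZE, in the spirit of the definition described above; the directory object, file name, field names, and sizes are all hypothetical:

    CREATE TABLE ext_fixed_demo (
      field1 VARCHAR2(200),
      field2 VARCHAR2(80)
    )
    ORGANIZATION EXTERNAL (
      TYPE ORACLE_LOADER
      DEFAULT DIRECTORY ext_dir          -- hypothetical directory object
      ACCESS PARAMETERS (
        RECORDS FIXED 280                -- fixed 280-byte records
        READSIZE 5242880                 -- explicit 5M read buffer
        FIELDS (
          field1 POSITION(1:200)   CHAR(200),
          field2 POSITION(201:280) CHAR(80)
        )
      )
      LOCATION ('data.dat')
    )
    REJECT LIMIT UNLIMITED;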

  • How can I obtain the tables of one schema and the record size?

    How can I obtain the tables of one schema and the record size of each?
    Example:
    TableName    Record Size
    Tabla1       12500
    Tabla2       7800
    Tabla3       2046

    This is not an OWB question, but you can obtain row-size information on tables by using the system view dba_tables.
    Regards:
    Igor
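    A minimal sketch of that approach, assuming a hypothetical schema name and that statistics have been gathered (avg_row_len is populated by DBMS_STATS):

    SELECT table_name,
           avg_row_len                -- average row size in bytes
    FROM   dba_tables
    WHERE  owner = 'MYSCHEMA'         -- hypothetical schema name
    ORDER  BY avg_row_len DESC;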

  • Trail record size & check frequency for end of uncommitted transactions

    Hi, everyone,
    Does anyone know if the trail record size can be changed from its default value of 4K?
    And what about how long the data pump process delays before searching for more data to process in its source trail while it waits for the end of an uncommitted transaction (which is 1 second)?

    Thank you for your answer, MikeN
    The delay I'm referring to cannot be set with EofDelay or EofDelayCSecs: those parameters establish how much time the Data Pump process sleeps when there is nothing new in its source trail. The delay that bothers me seems to happen when the Data Pump has nothing new in its source trail but is in the middle of processing an open transaction.
    I think it is better explained with an example (which I think also explains the goal of changing the record size):
    This is an excerpt from the Extract process trace:
    *09:51:59.653622 write(20, "G\1\0\307H\0\0<E\4\0A\0F\5\377\2\361\361\317\21\232\265\300\0\0\0\0\0)h\20"..., 4096) = 4096* <----- Extract writes down the first 4K record of the transaction
    09:51:59.653690 time(NULL) = 1349769119
    09:51:59.653726 time(NULL) = 1349769119
    09:51:59.653763 time(NULL) = 1349769119
    09:51:59.653803 time(NULL) = 1349769119
    09:51:59.653838 time(NULL) = 1349769119
    09:51:59.653877 time(NULL) = 1349769119
    09:51:59.653913 time(NULL) = 1349769119
    09:51:59.653948 time(NULL) = 1349769119
    09:51:59.653987 time(NULL) = 1349769119
    09:51:59.654024 time(NULL) = 1349769119
    09:51:59.654058 time(NULL) = 1349769119
    09:51:59.654097 time(NULL) = 1349769119
    09:51:59.654140 time(NULL) = 1349769119
    09:51:59.654174 gettimeofday({1349769119, 654182}, NULL) = 0
    09:51:59.654207 clock_gettime(CLOCK_REALTIME, {1349769119, 654216293}) = 0
    09:51:59.654234 futex(0x9b62584, FUTEX_WAIT_PRIVATE, 957, {0, 999965707}) = 0
    09:51:59.751502 futex(0x9b62568, FUTEX_WAKE_PRIVATE, 1) = 0
    09:51:59.751554 _llseek(19, 2722304, [2722304], SEEK_SET) = 0
    09:51:59.751608 futex(0x9b62534, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x9b62530, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
    09:51:59.751682 nanosleep({0, 0}, NULL) = 0
    *09:52:00.162689 write(20, "\0D\0\0O\0\0\0\30\0\0\0\0240000100050134977631"..., 2374) = 2374* <----- Extract writes down the remaining data for the transaction
    And this is an excerpt of the corresponding Data Pump process trace:
    09:51:59.653398 read(11, "F\0\4/0\0\1\3210\0\0\10GG\r\nTL\n\r1\0\0\2\0\0032\0\0\4 \0"..., 1048576) = 7604
    09:51:59.653472 stat64("/stella_dat/ggate/tlstella/tl000195", 0xbfca2a0c) = -1 ENOENT (No such file or directory)
    *09:51:59.653543 nanosleep({0, 0}, NULL) = 0* <---- This is EOFDELAY: it's set to 0
    09:51:59.653651 _llseek(11, 0, [0], SEEK_SET) = 0
    *09:51:59.653709 read(11, "F\0\4/0\0\1\3210\0\0\10GG\r\nTL\n\r1\0\0\2\0\0032\0\0\4 \0"..., 1048576) = 11700* <----- Data Pump detects a new record in the source trail
    09:51:59.653767 read(11, "", 1048576) = 0
    09:51:59.653840 time(NULL) = 1349769119
    09:51:59.653910 time(NULL) = 1349769119
    09:51:59.653959 time(NULL) = 1349769119
    09:51:59.654014 time(NULL) = 1349769119
    09:51:59.654067 time(NULL) = 1349769119
    09:51:59.654123 time(NULL) = 1349769119
    09:51:59.654181 time(NULL) = 1349769119
    09:51:59.654232 time(NULL) = 1349769119
    09:51:59.654274 time(NULL) = 1349769119
    09:51:59.654312 time(NULL) = 1349769119
    09:51:59.654351 time(NULL) = 1349769119
    09:51:59.654389 time(NULL) = 1349769119
    09:51:59.654428 time(NULL) = 1349769119
    09:51:59.654467 time(NULL) = 1349769119
    09:51:59.654505 time(NULL) = 1349769119
    09:51:59.654543 time(NULL) = 1349769119
    09:51:59.654582 time(NULL) = 1349769119
    09:51:59.654620 time(NULL) = 1349769119
    09:51:59.654657 time(NULL) = 1349769119
    09:51:59.654695 time(NULL) = 1349769119
    09:51:59.654733 time(NULL) = 1349769119
    09:51:59.654771 time(NULL) = 1349769119
    09:51:59.654809 time(NULL) = 1349769119
    09:51:59.654844 read(11, "", 1048576) = 0
    *09:51:59.654881 nanosleep({1, 0}, NULL) = 0* <----- This is the 1 second delay that I want to get rid of
    *09:52:00.655079 read(11, "\0D\0\0O\0\0\0\30\0\0\0\0240000100050134977631"..., 1048576) = 2374* <----- Data Pump reads the second record of the transaction

  • RMS Record Size in Motorola Emulator

    The RMS record size in the Motorola emulator is only 1 KB,
    but I need to increase it, since I am going to store video in RMS for testing.
    Is there any way to change the record size in this Motorola emulator?
    Please reply...

    If it works like the Sun WTK, it could be in <WTK_FOLDER>/appdb/ ...

  • SQL server record size limitation.

    Hi all,
    I am using SQL Server, WXP & JBoss to write JSP. I am writing a CMS which needs to store a lot of documents and show them on the web.
    However, some of the content is too long - longer than the maximum record size of SQL Server. What is the most common solution to this problem?
    As the contents are in Unicode, saving them as text files seems difficult. Any other solution to suggest?
    Thanks a lot.
    kin

    Thanks, gocha,
    but SQL Server limits the field size to 4000 (for all Unicode data types), and I want to store something that is longer than 4000 characters, so I would like to find another solution to this problem.
    Thanks a lot.
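    One common approach, sketched below under the assumption of SQL Server 2005 or later (on SQL Server 2000 the NTEXT type plays the same role); the table and column names are hypothetical:

    CREATE TABLE cms_document (          -- hypothetical CMS content table
        doc_id INT IDENTITY PRIMARY KEY,
        title  NVARCHAR(200),
        body   NVARCHAR(MAX)             -- Unicode text up to ~2 GB, not bound by the 4000-character limit
    );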

  • Is there a max record size?

    This is probably an easy question for some of the Oracle Gurus out there. :-)
    While working on other database platforms, I've found that there is a maximum size that a single record in the database can be. This maximum size is usually based on what the database platform considers a page of data.
    Does Oracle have the same thing? If so, what is the maximum size, and is it platform dependent (i.e. do Windows and Unix have different max sizes)?
    The reason for the question is that we are debating whether to make a series of comment fields on the records CLOBs or VARCHAR2.
    Thanks for the help
    Shawn Smiley
    Software Architect/DBA
    xwave New England
    http://usa.xwave.com

    This is probably an easy question for some of the Oracle Gurus out there. :-)
    While working on other database platforms, I've found that there is a maximum size that a single record in the database can be. This maximum size is usually based on what the database platform considers a page of data. Does Oracle have the same thing?
    I know that in databases such as DB2 this is the case, but I don't think it is in Oracle - although I couldn't find any documentation to back this up. The reason I don't think this is true in Oracle is that in DB2 a row cannot be larger than the block size, whereas in Oracle you can have a row that is larger than the block size. If a row is larger than one block, Oracle puts the rest of the data in another block (or blocks). This is called row chaining or row migration, depending on the situation.
    The reason for the question is that we are debating whether to make a series of comment fields on the records CLOBs or VARCHAR2.
    I would base the decision on the following: if the maximum length of a comment will be 4000 characters or less, use VARCHAR2; otherwise use CLOB.
    HTH
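    A minimal sketch of the two options side by side, with hypothetical table and column names:

    CREATE TABLE order_note (
        note_id   NUMBER PRIMARY KEY,
        short_txt VARCHAR2(4000),   -- fine while comments never exceed 4000 bytes
        long_txt  CLOB              -- effectively unbounded; large values are stored out of line
    );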

  • Total record size in mysql

    Hi,
    have a peaceful day.
    What is the maximum size of a table in MySQL?
    What is the maximum number of records that can be stored in a table?
    Thanks,
    regards,
    rex

    http://www.google.com/search?sourceid=navclient-ff&ie=UTF-8&rls=GGGL,GGGL:2006-12,GGGL:en&q=maximum+table+size+mysql
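    The hard limits depend on the storage engine and the filesystem, but as a quick way to inspect current table sizes (a sketch assuming MySQL 5.0 or later and a hypothetical schema name):

    SELECT table_name,
           table_rows,                               -- approximate for InnoDB
           data_length + index_length AS total_bytes
    FROM   information_schema.tables
    WHERE  table_schema = 'mydb';                    -- hypothetical schema name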

  • Quicktime screen recording size

    Hi
    Since I learned how to do screen recording, I have been recording videos to post on YouTube.
    The problem is that the size of the recorded video is way too large (74 MB for a 1-minute recording).
    I think it is uncompressed and the quality is too high. Are there any compression programs, or any other ways to reduce the size?
    Thanks
    CC9799

  • Code Inspector - Record size limitation ???

    Hello all,
    I am running into a situation where an inspection results in a large number of records.
    The user interface warns me that this is the case, e.g. ">40,000 records found".
    Questions:
    1. How can I view all the issues associated with my object selection (assume 500 objects)?
    That is, if the UI limits the display to, say, the first 300 objects yielding 40,000 issues, how can I see the issues associated with the remaining 200 objects?
    2. Is this just a UI limitation?
    I have searched but found no answer to these questions. Your help is appreciated.
    Best,  John

    Thank you for the suggestion. I should have given more detail about the nature of my problem...
    My object list comes from a reference to a very large transport with hundreds of objects,
    so it is not so straightforward for me to separate by object. It would take hand comparison of objects and hand-building of object lists through visually scrolling results sorted by object - EXTREMELY painful.
    Is there a better way?
    Regards,  John

  • Cannot extract new version without getting errors: record size = 8 blocks, firefox cannot open: file exists, exiting with failure status due to previous errors

    I have Firefox 3.6 and have tried several times to update to newer versions... I always get errors on extracting and it never goes through. What am I doing wrong? I just tried version 8 and got the errors I listed.

    See:
    * https://support.mozilla.com/en-US/kb/Installing%20Firefox%20on%20Linux
    Also make sure your PC meets the minimum requirements:
    * http://www.mozilla.org/en-US/firefox/8.0/system-requirements/

  • Calculating the size of each record in a table?

    Hi to all,
    I am trying to create histograms based on the unique record sizes in a particular table. However, in order to do that, I need to calculate the size of each record in the table (in kilobytes, preferably) and get a list of unique record sizes.
    Is there a function or an easy way to calculate the size in kilobytes of a record in a table?
    Thank you in advance!
    Tony

    Hi,
    You can use the sum of the averages of the vsizes of the fields in the table. For example:
    TEST.SQL>CREATE TABLE TEST
      2  (
      3  A VARCHAR2(50 CHAR),
      4  B NUMBER
      5  ) TABLESPACE TOOLS;
    Table created.
    TEST.SQL>INSERT INTO TEST VALUES('&1',&2);
    Enter value for 1: A
    Enter value for 2: 10
    old   1: INSERT INTO TEST VALUES('&1',&2)
    new   1: INSERT INTO TEST VALUES('A',10)
    1 row created.
    TEST.SQL>/
    Enter value for 1: ABCDEFGHIJKLMNOPQRSTUVWXYZ
    Enter value for 2: 100000
    old   1: INSERT INTO TEST VALUES('&1',&2)
    new   1: INSERT INTO TEST VALUES('ABCDEFGHIJKLMNOPQRSTUVWXYZ',100000)
    1 row created.
    TEST.SQL>COMMIT;
    Commit complete.
    TEST.SQL>SELECT AVG(VSIZE(A)) + AVG(VSIZE(B)) FROM TEST;
    AVG(VSIZE(A))+AVG(VSIZE(B))
    ---------------------------
                           15.5
    The average line size - from the data in the table NOW - is 15.5 bytes.
    HTH,
    Yoann.
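    To get the list of distinct record sizes the histogram needs, rather than an average, one sketch along the same lines (same hypothetical TEST table; note that VSIZE counts column data only, not row overhead, and dividing by 1024 gives kilobytes):

    SELECT NVL(VSIZE(a), 0) + NVL(VSIZE(b), 0) AS row_bytes,
           COUNT(*)                            AS rows_of_this_size
    FROM   test
    GROUP  BY NVL(VSIZE(a), 0) + NVL(VSIZE(b), 0)
    ORDER  BY row_bytes;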

  • Berkeley DB needs too many write locks on specific size of the records

    Hi,
    I put records into Berkeley DB row by row in a single transaction, and I have discovered a significant increase in write locks at a specific size of the data.
    For example, when I put 1000 records where each record's data size is around 3500 bytes, the transaction uses 428 locks. At bigger or smaller data sizes the transaction needs fewer locks.
    I put the statistics in the table below:
    Record size    Locks needed by transaction
    ~1400          169
    ~3500          428
    ~4300          6
    I think it is somehow related to the page size (16384) or the cache size (64 MB).
    Could someone explain why the transaction needs so many write locks with a data size of ~3500 and fewer locks with a data size of ~4300?
    Is there any way to avoid that rise in the number of locks? If not, I need to measure the maximum number of locks needed for transactions to complete successfully. Understanding the source of the issue would help me prepare the data that requires the largest number of locks when put into the database in one transaction.
    Thanks in advance.

    Please delete this post and repost in the appropriate forum. Thank you.

  • Fetching many records all at once is no faster than fetching one at a time

    Hello,
    I am having a problem getting NI-Scope to perform adequately for my application.  I am sorry for the long post, but I have been going around and around with an NI engineer through email and I need some other input.
    I have the following software and equipment:
    LabVIEW 8.5
    NI-Scope 3.4
    PXI-1033 chassis
    PXI-5105 digitizer card
    DELL Latitude D830 notebook computer with 4 GB RAM.
    I tested the transfer speed of my connection to the PXI-1033 chassis using the niScope Stream to Memory Maximum Transfer Rate.vi found here:
    http://zone.ni.com/devzone/cda/epd/p/id/5273.  The result was 101 MB/s.
    I am trying to set up a system whereby I can press the start button and acquire short waveforms which are individually triggered.  I wish to acquire these individually triggered waveforms indefinitely.  Furthermore, I wish to maximize the rate at which the triggers occur.  In the limiting case where I acquire records of one sample, the record size in memory is 512 bytes (using the formula to calculate 'Allocated Onboard Memory per Record' found in the NI PXI/PCI-5105 Specifications under the heading 'Waveform Specifications', pg. 16).  The PXI-5105 trigger re-arms in about 2 microseconds (500 kHz), so to trigger at that rate indefinitely I would need a transfer speed of at least 256 MB/s.  So clearly, in this case the limiting factor for increasing the trigger rate while still acquiring indefinitely is the rate at which I transfer records from memory to my PC.
    To maximize my record transfer rate, I should transfer many records at once using the Multi Fetch VI, as opposed to the theoretically slower method of transferring one at a time.  To compare the rates at which I can transfer records using the all-at-once and one-at-a-time methods, I modified the niScope EX Timestamps.vi to let me choose between these transfer methods by changing the constant wired to the Fetch Number of Records property node to either -1 or 1, respectively.  I also added a loop that ensures that all records are acquired before I begin the transfer, so that acquisition and trigger rates do not interfere with measuring the record transfer rate.  This modified VI is attached to this post.
    I have the following results for acquiring 10k records.  My measurements are done using the Profile Performance and Memory Tool.
    I am using a 250kHz analog pulse source.
    Fetching 10000 records 1 record at a time the niScope Multi Fetch
    Cluster takes a total time of 1546.9 milliseconds or 155 microseconds
    per record.
    Fetching 10000 records at once the niScope Multi Fetch Cluster takes a
    total time of 1703.1 milliseconds or 170 microseconds per record.
    I have tried this for larger and smaller total numbers of records, and the transfer time is always around 170 microseconds per record, regardless of whether I transfer one at a time or all at once.  But with a 100 MB/s link and a 512-byte record size, the fetch speed should approach 5 microseconds per record as you increase the number of records fetched at once.
    With this, my application will be limited to a trigger rate of 5 kHz when running indefinitely, whereas it should be capable of closer to a 200 kHz trigger rate for extended periods of time.  I have a feeling that I am missing something simple or am just confused about how the Fetch functions should work. Please enlighten me.
    Attachments:
    Timestamps.vi 73 KB

    Hi ESD
    Your numbers for testing the PXI bandwidth look good.  A value of approximately 100 MB/s is reasonable when pulling data across the PXI bus continuously in larger chunks.  This may decrease a little when working with MXI in comparison to using an embedded PXI controller.  I expect you were using the streaming example "niScope Stream to Memory Maximum Transfer Rate.vi" found here: http://zone.ni.com/devzone/cda/epd/p/id/5273.
    Acquiring multiple triggered records is a little different.  There are a few techniques that will help to make sure that you are able to fetch your data fast enough to keep up with the acquired data or the desired reference trigger rate.  You are certainly correct that it is more efficient to transfer larger amounts of data at once, instead of small amounts of data more frequently, as the overhead due to DMA transfers becomes significant.
    The trend you saw - that fetching fewer records was more efficient - sounded odd, so I ran your example and tracked down what was causing it.  I believe it is actually the for loop that you had in your acquisition loop.  I made a few modifications to the application to display the total fetch time to acquire 10000 records.  The best fetch time is when all records are pulled in at once.  I left your code in the application but temporarily disabled the for loop to show the fetch performance.  I also added a loop to ramp the fetch number up and graph the fetch times.  I will attach the modified application as well as the fetch results I saw on my system for reference.  When the for loop is enabled, the performance is worst at 1-record fetches; the fetch time dips around 500 records/fetch and begins to ramp up again as records/fetch increases toward 10000.
    Note that I am using the 2D I16 fetch, as it is more efficient to keep the data unscaled.  I have also added an option to use immediate triggering - this is just because I was not near my hardware to physically connect a signal, so I used the trigger holdoff property to simulate a given trigger rate.
    Hope this helps.  I was working in LabVIEW 8.5; if you are working with an earlier version, let me know.
    Message Edited by Jennifer O on 04-12-2008 09:30 PM
    Attachments:
    RecordFetchingTest.vi 143 KB
    FetchTrend.JPG 37 KB

  • Displaying some records at a time

    Does anyone know how to display only 10 records at a time from the database, followed by a Next button to display the next 10, and so on?

    Can anyone check this?
    What does this mean:
    String sql_2=request.getParameter("sql_2");
    Is anything missing?
    <%@ page language="Java" import = "java.sql.*" %>
    <jsp:useBean id="myBean" scope ="session" class="bean.DataConnectionBean" />
    <html>
    <body bgcolor="#CCFFFF">
    <%
    String sql; //SQL string
    int rowPerPage=10; //Record size of one page
    int rowTotal; //Total records
    int pageTotal; //Total pages
    int pageIndex; //Pages waiting for display, from 1
    pageIndex = Integer.parseInt( request.getParameter("page") );
    rowTotal=Integer.parseInt( request.getParameter("rowTotal") );
    String sql_2=request.getParameter("sql_2"); // WHERE-clause fragment from the request; concatenating it below is a SQL injection risk
    String sql_1= "select " +
    "vin, " +
    "ga_no, " +
    "ga_build_date, " +
    "deliver_date, " +
    "model, " +
    "year_code, " +
    "engine_no, " +
    "initial_mileage " +
    "from vehiclemaster " +
    "where ";
    sql=sql_1+sql_2;
    myBean.connect();
    ResultSet rs=myBean.executeQuery(sql);
    pageTotal = (rowTotal+rowPerPage-1) / rowPerPage; // Get total pages
    if(pageIndex>pageTotal) pageIndex = pageTotal;
    %>
    <table border="0" cellpadding="2" cellspacing="0">
    <tr>
    <td align="left">
    <div align="left">Total <%=rowTotal%> Records</div>
    </td>
    </tr>
    </table>
    <table border="0" width="844">
    <tr bgcolor="#669966">
    <th>VIN</th>
    <th>GA#</th>
    <th>build date</th>
    <th>deliver date</th>
    <th>model</th>
    <th>model year code</th>
    <th>engine#</th>
    <th>initial mileage</th>
    </tr>
    <%
    int absolute=(pageIndex-1) * rowPerPage; // index of the first row on the requested page
    for(int k=0; k<absolute; k++) rs.next(); // skip over the rows of all previous pages
    int i = 0;
    String LineColor;
    while( rs.next() && i<rowPerPage && (absolute+i)<rowTotal ){
    // read each column, mapping SQL NULL to an empty string (the inner assignment initializes the variable)
    String col1=( ( ( col1=rs.getString(1) ) == null || rs.wasNull() ) ? "" : col1 );
    String col2=( ( ( col2=rs.getString(2) ) == null || rs.wasNull() ) ? "" : col2 );
    String col3=( ( ( col3=rs.getString(3) ) == null || rs.wasNull() ) ? "" : col3 );
    String col4=( ( ( col4=rs.getString(4) ) == null || rs.wasNull() ) ? "" : col4 );
    String col5=( ( ( col5=rs.getString(5) ) == null || rs.wasNull() ) ? "" : col5 );
    String col6=( ( ( col6=rs.getString(6) ) == null || rs.wasNull() ) ? "" : col6 );
    String col7=( ( ( col7=rs.getString(7) ) == null || rs.wasNull() ) ? "" : col7 );
    String col8=( ( ( col8=rs.getString(8) ) == null || rs.wasNull() ) ? "" : col8 );
    if (i % 2 == 0) LineColor="#DBECFD";
    else LineColor="#C6E1FD";
    %>
    <TR bgcolor="<%= LineColor %>">
    <TD><%=col1%></TD>
    <TD><%=col2%></TD>
    <TD><%=col3%></TD>
    <TD><%=col4%></TD>
    <TD><%=col5%></TD>
    <TD><%=col6%></TD>
    <TD><%=col7%></TD>
    <TD><%=col8%></TD>
    </TR>
    <%i++;
    }%>
    </table>
    <div align="center">
    <table border="0" align="left">
    <tr>
    <td align="left">
    <div align="left">Page <%=pageIndex%> / <%=pageTotal%> 
    <%if(pageIndex<pageTotal){%>
    <a href="QueryVehicleResult.jsp?sql_2=<%=sql_2%>&rowTotal=<%=rowTotal %>&page=<%=pageIndex+1%>"><img src="images/next.gif" border="0">
    </a>
    <%}%>
    <%if(pageIndex>1){%>
    <a href="QueryVehicleResult.jsp?sql_2=<%=sql_2%>&rowTotal=<%=rowTotal %>&page=<%=pageIndex-1%>"><img src="images/prev.gif" border="0">
    </a>
    <%}%>
    </div>
    </td>
    </tr>
    </table>
    </div>
    <meta http-equiv="Content-Type" content="text/html; charset=gb2312">
    </body>
    </html>
    <%
    rs.close();
    myBean.close();
    %>
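    As an alternative to skipping rows with rs.next(), the paging can be pushed into the query itself. A sketch, assuming the database is Oracle (ROWNUM style; MySQL would use LIMIT 10 OFFSET 10 instead) and that vin gives a stable ordering; the literal bounds shown would come from pageIndex and rowPerPage:

    SELECT *
    FROM  ( SELECT t.*, ROWNUM AS rn
            FROM  ( SELECT vin, ga_no, ga_build_date, deliver_date,
                           model, year_code, engine_no, initial_mileage
                    FROM   vehiclemaster
                    ORDER  BY vin ) t            -- assumed ordering column
            WHERE  ROWNUM <= 20 )                -- pageIndex * rowPerPage
    WHERE  rn > 10;                              -- (pageIndex - 1) * rowPerPage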
