Improving Performance between WRT610N (v1) and WET610N (v1)

Any advice on how to improve performance of the connection between these two Wireless N devices?
WRT610N (v1) and WET610N (v1)
We have a PS3 connected to the WET610N directly via Ethernet, and the WET610N bridges the PS3's communication wirelessly to the Internet through the WRT610N.  Just wondering if there are some known tweaks that will improve performance between them.  One thing I find ridiculous is that the WET610N shows only 30% signal strength from the WRT610N, even though the two are not far from each other.
Any input is welcome!

What's the distance between your router and the bridge? Are you connected to the 5 GHz or the 2.4 GHz wireless network? And what wireless settings have you set up on your router?

Similar Messages

  • Am I getting Gigabit transfer between WRT610N router and NMH410 Media Hub?

    Shouldn't the Ethernet port LED be flashing blue (gigabit connection) between these two connected devices? It's flashing green, which, according to the router manual, suggests a 10/100 port. I'm sure I'm missing something!

    If you are referring to the WRT610N manual's description of the light status, you need to reverse it. The Linksys manual says the LED lights up green when connected to a 10/100 port and blue when connected to a gigabit port, but in reality it is the other way around: green for gigabit and blue for a 10/100 port.

  • Comparing performance between TDE encryption and no encryption

    Hi all,
    How can I check how much database resource usage (%CPU, elapsed time) increases when using TDE encryption?
    Thank you!
    Dan.
    Edited by: Dan on Jul 10, 2011 10:13 PM

    The performance implications of using TDE are going to depend on a number of factors including
    - The version of Oracle
    - The hardware available (in particular whether hardware acceleration is available for encryption)
    - Whether you are using tablespace encryption or column-level encryption
    - If you are using column-level encryption how many columns you are encrypting
    - What sort of workload your system is doing.
    - Where your system bottlenecks today without encryption
    Without knowing those things, it's hard to narrow the answer down beyond "somewhere between 0 and 50%", which is obviously far too large a range to be meaningful.
    On the one hand, the worst case is probably represented by this test case where you're using column-level encryption of one column of a two-column table in 10.2 and doing single-row inserts and deletes. Those operations are already heavily CPU bound and, since you're using column-level encryption, the data has to be encrypted and decrypted every time it goes into or out of the SGA. If you were using tablespace-level encryption, the data would only need to be encrypted and decrypted when it is read from or written to disk, which would be far faster for this test case. Later versions of Oracle also tend to be more efficient.
    On the other hand, if you're using 11.2 with the most recent patches and you've got hardware acceleration, Oracle is happy to trumpet the near-zero performance impact of TDE (http://www.oracle.com/technetwork/database/options/advanced-security/index-099011.html).
    Most people live somewhere between these two extremes, but it's hard to guess where your particular application falls. I would guess that most people would see something like a 10-15% increase in CPU consumption, but that's just a wild guess based on a relatively small sample of systems.
    Justin
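    For concreteness, here is a minimal sketch of the two TDE flavours compared above (standard Oracle DDL; it assumes the wallet/keystore is already configured, and the object names are illustrative):

    -- Column-level TDE: this column is decrypted/encrypted on every trip
    -- into and out of the SGA.
    CREATE TABLE emp_sensitive (
      emp_id NUMBER,
      ssn    VARCHAR2(11) ENCRYPT USING 'AES192'
    );

    -- Tablespace-level TDE: blocks are decrypted/encrypted only on disk I/O,
    -- so data sitting in the buffer cache is already in the clear.
    CREATE TABLESPACE enc_ts
      DATAFILE 'enc_ts01.dbf' SIZE 100M
      ENCRYPTION USING 'AES192'
      DEFAULT STORAGE (ENCRYPT);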

  • Performance between SQL Statement and Dynamic SQL

    Select emp_id
    into   id_val
    from   emp
    where  emp_id = 100;

    EXECUTE IMMEDIATE
      'Select ' || t_emp_id ||
      ' from emp' ||
      ' where emp_id = 100'
    into id_val;

    Will there be more impact on performance while using dynamic SQL?

    CP wrote:
    Will there be more impact on performance while using dynamic SQL?

    All SQLs are parsed and executed as SQL cursors.
    The two SQLs (dynamic and static) result in the exact same SQL cursor, so both methods use an identical cursor. There are therefore no performance differences in terms of how fast that SQL cursor will be.
    If an identical SQL cursor already exists in the shared pool, it is simply reused (a soft parse). If not, the SQL engine needs to compile the supplied SQL source code into a SQL cursor (a hard parse).
    Hard parsing burns a lot of CPU cycles. Soft parsing burns fewer CPU cycles and is therefore better. However, no parsing at all is best.
    To explain: if the code creates a cursor (e.g. INSERT INTO tab VALUES( :1, :2, :3 ) for inserting data), it can do it as follows:
    while More Data Found loop
      parse INSERT cursor
      bind variables to INSERT cursor
      execute INSERT cursor
      close INSERT cursor
    end loop

    If that INSERT cursor does not yet exist, it will be hard parsed and a cursor created. Each subsequent loop iteration will result in a soft parse.
    However, the code will be far more optimal as follows:
    parse INSERT cursor
    while More Data Found loop
      bind variables to INSERT cursor
      execute INSERT cursor
    end loop
    close INSERT cursor

    With this approach the cursor is parsed (hard or soft) once only. The cursor handle is then used again and again, and when the application is done inserting data, the handle is released.
    With dynamic SQL in PL/SQL, you cannot really follow the optimal approach unless you use DBMS_SQL (a complex cursor interface; see the sketch at the end of this post). With static SQL, PL/SQL's optimiser can kick in: it optimises access to the cursors your code creates and minimises parsing altogether.
    This is however not the only consideration when using dynamic SQL. Dynamic SQL makes coding a lot more complex. The SQL code can only be checked at execution time, not at development time. There is the issue of creating shareable SQL cursors using bind variables. There is the risk of SQL injection. Etc.
    So dynamic SQL is seldom a good idea. And IMO, the vast majority of people who post problems here about dynamic SQL are using it unnecessarily, for no justified or logical reason, creating unstable, insecure and non-performing code.
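    For reference, a minimal sketch of the parse-once/execute-many pattern via DBMS_SQL (table name and row count are illustrative):

    DECLARE
      c INTEGER := DBMS_SQL.OPEN_CURSOR;
      n INTEGER;
    BEGIN
      -- parsed once, outside the loop (the only hard or soft parse)
      DBMS_SQL.PARSE( c, 'insert into tab values( :1 )', DBMS_SQL.NATIVE );
      FOR i IN 1 .. 1000 LOOP
        DBMS_SQL.BIND_VARIABLE( c, ':1', i );  -- re-bind per row
        n := DBMS_SQL.EXECUTE( c );            -- re-execute the same cursor
      END LOOP;
      DBMS_SQL.CLOSE_CURSOR( c );
    END;
    /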

  • How to improve performance of photo albums and navigation?

    I just uploaded the latest version of my new website (www.raydunakin.com) last night. I've made a lot of changes in an effort to make it load faster and work more smoothly. There is some improvement, but on my Mac, over dial-up, there are still some issues.
    One big issue is photo albums. I don't understand why they load so slowly when it's only loading thumbnail images; the thumbnails are too small to account for the excessive load time. Often there are several that don't load at all until sometime after I click the browser's Stop button.
    Which is another issue: Why does it keep loading stuff after I've clicked Stop?
    I've stripped my albums down to just 15 images per album, so the number of images shouldn't be a problem.
    BTW, I gave up on using the "My Albums" template, which was impossibly slow. Instead I just have a page with a list of links to each album.
    I'm wondering if having everything all in one site might be causing some of the speed problems. Would I be better off moving my photo album pages to a separate site and just link to them from the main site?
    One more question: Many of my hypertext links are not displaying correctly. They are supposed to all be underlined and in a different color from the plain text. Some of them do show up this way, but some look like plain text until you move the cursor over them. (And yes, I have used the Inspector to set the format for all the links.)
    Also, if anyone would like to browse my site and suggest other ways to improve or streamline it, I would appreciate it.
    Thanks in advance for any help or comments you can provide.
    I'm running OS X 10.4.10, Safari 2.0.4, and iWeb 2.0.3

    Hi Ray,
    Ray Dunakin wrote:
    One big issue is photo albums. I don't understand why they load so slowly, when it's only loading thumbnail images. The thumbnails are too small to account for the excessive load time.
    To me it's not taking all that much time... It's quite normal if you're publishing to .Mac; the .Mac server is quite slow, that's a known issue... It may partially be due to the template used, but I don't think so; there aren't a lot of graphics on that one...
    Often there are several that don't load at all until sometime after I click the browser's Stop button.
    Which is another issue: Why does it keep loading stuff after I've clicked Stop?
    I didn't come across any that didn't load, and when I hit Stop, it stops.
    I'm wondering if having everything all in one site might be causing some of the speed problems. Would I be better off moving my photo album pages to a separate site and just link to them from the main site?
    No, that's not the issue. The site size doesn't matter: when you look at a page, the browser only retrieves the information for that very page; it doesn't even see all the other pages, as it just follows the links to retrieve the parts of that one page and that's it. What matters is the page size, and yours isn't heavy at all...
    One more question: Many of my hypertext links are not displaying correctly. They are supposed to all be underlined and in a different color from the plain text. Some of them do show up this way, but some look like plain text until you move the cursor over them. (And yes, I have used the Inspector to set the format for all the links.)
    You may try a "Publish all" from the File menu for that... or empty your browser's cache...
    Also, if anyone would like to browse my site and suggest other ways to improve or streamline it, I would appreciate it.
    I like it; it's very clean and easy to navigate.
    One other problem with the photo albums that I forgot to mention... When clicking on a thumbnail to view the larger image, while the image loads it looks like nothing is happening. There's no evidence that the image is loading or anything. So by the time the larger image appears, I've already attempted to move on. Is there anything that can be done about this?
    It takes a second to load, but in the end it does load... To me it's the server's slowness, even when not publishing to .Mac...
    Regards,
    Cédric

  • Is there any way of improving compatibility between iPad 4 and Windows XP?

    Is there any way of improving compatibility between iPad and Windows XP?

    Hi Smokey0422,
    Although iTunes works with Windows XP, iCloud is not supported - so that does limit the functionality somewhat.
    See the resources for your iPad at the following site:
    http://www.apple.com/support/ipad/
    See the setup instructions below:
    http://www.apple.com/icloud/setup/pc.html
    Cheers,
    Judy

  • Different performance between Flash Player and Adobe AIR on Mac

    Hi all,
    I am developing an HTML-based AIR application that embeds a SWF. The SWF runs very slow and choppy, while the same SWF loaded directly in Flash Player runs fine.
    This happens only on Mac (all Adobe AIR SDK versions behave the same).
    Do you have any clue?
    Thank you
    Bye

    Hi,
    Could you provide the following information?
    1. What's the version of your Mac?
    2. Do you mind sending your AIR app to zjian at adobe.com for investigation?
    Thanks,
    Jian
    AIR Engineering

  • Are there any applications that help improve compatibility between Microsoft PowerPoint and OS X Mavericks?

    When I want to view the presentation in full screen, the program crashes.
    - PowerPoint 2011, version 14.0 (100825)
    - MacBook Air
    - OS X Mavericks (10.9.2)

    You will need to contact Microsoft for Mac Support and/or post in their forums.

  • Difference in WS performance between Search and Retrieve operations?

    All,
    We are currently working on a new repository and planning to use MDM webservices on top of that repository for searching and retrieving the data.
    Now I'm curious about the difference in performance between the Search and Retrieve operations, and also, within the Retrieve operation, between the different identification methods (internal ID, auto ID, remote key, unique field and display field).
    The web services guide states that the identification methods are listed in order of best performance, but what are the actual performance differences between these methods (e.g. a retrieve on internal ID is x times faster than a retrieve on remote key, which in turn is x times faster than a retrieve on display fields, which in turn is x times faster than a search operation on the same display field)?
    Of course the performance depends on a lot of other things as well, but I just want to get a feeling for the performance of the methods relative to each other (keeping all other variables that can influence performance the same!).
    I hope that some of you have experience with all the options and can share performance measurements comparing the different operations relative to each other. Thanks in advance.
    Regards,
    Marcel Herber

    Hi,
    Did you implement web services on your site?
    We have a similar scenario where we have to search records in MDM from SAP PI based on certain criteria. I am concerned about SAP MDM performance, since we have a heavy amount of data being loaded every 30 minutes.
    Please let me know the performance aspects of using web services.
    Thanks
    Ganesh Kotti

  • SG500 Slow Performance Between VLANs

    Hello,
    I am having an issue with slow performance between VLAN 1 and VLAN 10. I have IPv4 routing enabled and an SVI on VLAN 1 and VLAN 10 respectively. Within the same VLAN the speed is great. Would using VLAN 1 in production for something like this also be a problem? Normally I stay away from VLAN 1.
    Thanks

    Hi Alexandery,
    In my opinion, this thread is related to the ASP.NET forum, so please post it on that forum for a more effective response. Thank you for understanding. Please refer to the following link.
    http://forums.asp.net/.
    Regards,

  • A64 Tweaker and Improving Performance

    I noticed a little utility called "A64 Tweaker" being mentioned in an increasing number of posts, so I decided to track down a copy and try it out...basically, it's a memory tweaking tool, and it actually is possible to get a decent (though not earth-shattering by any means) performance boost with it.  It also lacks any real documentation as far as I can find, so I decided to make a guide type thing to help out users who would otherwise just not bother with it.
    Anyways, first things first, you can get a copy of A64 Tweaker here:  http://www.akiba-pc.com/download.php?view.40
    Now that that's out of the way, I'll walk through all of the important settings, minus Tcl, Tras, Trcd, and Trp, as these are the typical RAM settings that everyone is always referring to when they go "CL2.5-3-3-7", so information on them is widely available, and everyone knows that for these settings, lower always = better. Note that for each setting, I will list the measured change in my SiSoft Sandra memory bandwidth score over the default setting. If a setting produces a change of < 10 MB/sec, its effects will be listed as "negligible" (though note that it still adds up, and a setting that has a negligible impact on throughput may still have an important impact on memory latency, which is just as important). As for the rest of the settings, I'll do the important things on the left-hand side first, then the things on the right-hand side; the things at the bottom are HTT settings that I'm not going to muck with:
    Tref - I found this setting to have the largest impact on performance out of all the available settings.  In a nutshell, this setting controls how your RAM refreshes are timed...basically, RAM can be thought of as a vast series of leaky buckets (except in the case of RAM, the buckets hold electrons and not water), where a bucket filled beyond a certain point registers as a '1' while a bucket with less than that registers as a '0', so in order for a '1' bucket to stay a '1', it must be periodically refilled (i.e. "refreshed").  The way I understand this setting, the frequency (100 MHz, 133 MHz, etc.) controls how often the refreshes happen, while the time parameter (3.9 microsecs, 1.95 microsecs, etc.) controls how long the refresh cycle lasts (i.e. how long new electrons are pumped into the buckets).  This is important because while the RAM is being refreshed, other requests must wait.  Therefore, intuitively it would seem that what we want are short, infrequent refreshes (the 100 MHz, 1.95 microsec option).  Experimentation almost confirms this, as my sweet spot was 133 MHz, 1.95 microsecs...I don't know why I had better performance with this setting, but I did.  Benchmark change from default setting of 166 MHz, 3.9 microsecs: + 50 MB/sec
    Trfc - This setting offered the next largest improvement...I'm not sure exactly what this setting controls, but it is doubtless similar to the above setting.  Again, lower would seem to be better, but although I was stable down to '12' for the setting, the sweet spot here for my RAM was '18'.  Selecting '10' caused a spontaneous reboot.  Benchmark change from the default setting of 24:  +50 MB/sec
    Trtw - This setting specifies how long the system must wait after it reads a value before it tries to overwrite the value. This is necessary due to various technical aspects related to the fact that we run superscalar, multiple-issue CPUs that I don't feel like getting into, but basically, smaller numbers are better here. I was stable at '2'; selecting '1' resulted in a spontaneous reboot. Benchmark change from default setting of 4: +10 MB/sec
    Twr - This specifies how much delay is applied after a write occurs before the new information can be accessed.  Again, lower is better.  I could run as low as 2, but didn't see a huge change in benchmark scores as a result.  It is also not too likely that this setting affects memory latency in an appreciable way.  Benchmark change from default setting of 3:  negligible
    Trrd - This controls the delay between a row address strobe (RAS) and a second row address strobe. Basically, think of memory as a two-dimensional grid; to access a location in the grid, you need both a row and a column number. A memory access first asserts the row it wants (the row address strobe, or RAS) and then the column (the column address strobe, or CAS). Because of a number of factors (prefetching, block addressing, the way data gets laid out in memory), the system will often access several locations in the same open row to improve performance (so one RAS is followed by several CAS strobes). I was able to run stably with a setting of 1 for this value, although I didn't get an appreciable increase in throughput. It is likely however that this setting has a significant impact on latency. Benchmark change from default setting of 2: negligible
    Trc - I'm not completely sure what this setting controls, although I found it had very little impact on my benchmark score regardless of what values I specified.  I would assume that lower is better, and I was stable down to 8 (lower than this caused a spontaneous reboot), and I was also stable at the max possible setting.  It is possible that this setting has an effect on memory latency even though it doesn't seem to impact throughput.  Benchmark change from default setting of 12:  negligible
    Dynamic Idle Cycle Counter - I'm not sure what this is, and although it sounds like a good thing, I actually post a better score when running with it disabled.  No impact on stability either way.  Benchmark change from default setting of enabled:  +10 MB/sec
    Idle Cycle Limit - Again, not sure exactly what this is, but testing showed that both extremely high and extremely low settings degrade performance by about 20 MB/sec.  Values in the middle offer the best performance.  I settled on 32 clks as my optimal setting, although the difference was fairly minimal over the default setting.  This setting had no impact on stability.  Benchmark change from default setting of 16 clks:  negligible
    Read Preamble - As I understand it, this is basically how much of a "grace period" is given to the RAM when a read is asserted before the results are expected.  As such, lower values should offer better performance.  I was stable down to 3.5 ns, lower than that and I would get freezes/crashes.  This did not change my benchmark scores much, though in theory it should have a significant impact on latency.  Benchmark change from default setting of 6.0 ns:  negligible
    Read Write Queue Bypass - Not sure what it does, although there are slight performance increases as the value gets higher.  I was stable at 16x, though the change over the 8x default was small.  It is possible, though I think unlikely, that this improves latency as well.  Benchmark change from default setting of 8x:  negligible
    Bypass Max - Again not sure what this does, but as with the above setting, higher values perform slightly better.  Again I feel that it is possible, though not likely, that this improves latency as well.  I was stable at the max of 7x.  Benchmark change from the default setting of 4x:  negligible
    Asynch latency - A complete mystery.  Trying to run *any* setting other than default results in a spontaneous reboot for me.  No idea how it affects anything, though presumably lower would be better, if you can select lower values without crashing.
    ...and there you have it. With the tweaks mentioned above, I was able to gain +160 MB/sec on my Sandra score, +50 on my PCMark score, and +400 on my 3DMark 2001 SE score. Like I said, not earth-shattering, but a solid performance boost, and it's free to boot. Settings that I felt were of no use in tweaking the RAM for added performance, or which are self-explanatory, have been left out. The above tests were performed on Corsair XMS PC4000 RAM @ 264 MHz, CL2.5-3-4-6 1T.

    Quote
    Hm...I wonder which one is telling the truth, the BIOS or A64 tweaker.
    I've wondered this myself. From my understanding it's the next logical step from the WCREDIT programs. I understand how a clock generator can misreport frequency, because it's probably not measuring the frequency itself but rather computing a mathematical representation from a few numbers it has gathered and one clock frequency (HTT maybe?), and unsupported dividers mess up the math. But I think the tweaker just extracts hex values straight from the registers and displays them in "English". I mean, it could be wrong, but seeing how I've watched the BIOS on the SLI Plat change the memory timings in the POST screen to values other than SPD when set to Auto with aggressive timings disabled, I actually want to side with A64 Tweaker in this case.
    Hey, anyone know what Tref in A64 Tweaker relates to in the BIOS? i.e. 200 1.95 us = what in the BIOS? 1x4028, 1x4000 (I'm just making up numbers here), but it's different from 200 1.95. Last time I searched I didn't find anything. Well, I found a lot, but not what I wanted.

  • Difference between temp tables and table variables, and which one is better performance-wise?

    Hello,
    Could anyone explain the difference between temp tables (#, ##) and table variables (DECLARE @V TABLE (EMP_ID INT))?
    Which one is recommended for better performance?
    Also, is it possible to create CLUSTERED and NONCLUSTERED indexes on a table variable?
    In my case, 1-2 days of transactional data is more than 3-4 million rows. I tried using both a # temp table and a table variable, and found the table variable faster.
    Does a table variable use memory or disk space?
    Thanks Shiven:) If Answer is Helpful, Please Vote

    Check the following link to see the differences between temp tables and table variables: http://sqlwithmanoj.com/2010/05/15/temporary-tables-vs-table-variables/
    Temp tables and table variables both use memory and tempdb in a similar manner; check this blog post: http://sqlwithmanoj.com/2010/07/20/table-variables-are-not-stored-in-memory-but-in-tempdb/
    Performance-wise, if you are dealing with millions of records then a temp table is ideal, as you can create explicit indexes on top of it. But if there are fewer records, then table variables are well suited.
    On table variables explicit indexes are not allowed; if you define a PK column, a clustered index will be created automatically.
    But it also depends upon the specific scenario you are dealing with. Can you share it?
    ~manoj | email: http://scr.im/m22g
    http://sqlwithmanoj.wordpress.com
    MCCA 2011 | My FB Page
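    A minimal T-SQL sketch of the indexing difference described above (names are illustrative; behaviour as of SQL Server versions before inline indexes on table variables were added):

    -- Temp table: explicit indexes can be created after the fact.
    CREATE TABLE #Emp (EMP_ID INT, DEPT_ID INT);
    CREATE NONCLUSTERED INDEX IX_Emp_Dept ON #Emp (DEPT_ID);

    -- Table variable: no CREATE INDEX allowed; declaring a PRIMARY KEY
    -- implicitly creates a clustered index.
    DECLARE @Emp TABLE (EMP_ID INT PRIMARY KEY, DEPT_ID INT);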

  • How do I improve performance while doing pull, push and delete from Azure Storage Queue

    Hi,
    I am working on a distributed application that uses Azure Storage Queue for message queuing. The queue will be used by multiple clients around the clock, so it is expected to be heavily loaded most of the time. The business case is typical: pull a message from the queue, process it, then delete it from the queue. The module also sends a notification back to the user indicating that processing is complete. The functions/modules work fine, in that they meet the logical requirement. A pretty typical queue scenario.
    Now, the problem statement: since the queue is expected to be heavily loaded most of the time, I am pushing to speed up processing over the whole message lifetime. The faster I can clear messages, the better the overall experience for everyone, system and users.
    To improve performance I ran multiple cycles of profiling and then improved the identified "HOT" paths/functions. It came down to a point where the Azure queue pull and delete were the only two significantly time-consuming calls left. I further improved the pull by batch-pulling 32 messages at a time (the maximum message count that can be pulled from an Azure queue in one call at the time of writing this question), and this paid off by reducing processing time by a big margin. All good up to this point as well.
    I am processing these messages in parallel so as to improve overall performance.
    pseudo code:

    // AzureQueue class encapsulates calls to Azure Storage Queue.
    // Assume nothing fancy inside: vanilla calls to the queue for pull/push/delete.
    var batchMessages = AzureQueue.Pull(32);
    Parallel.ForEach(batchMessages, bMessage =>
    {
        // DoSomething does some background processing
        try { DoSomething(bMessage); }
        catch { /* log the exception */ }
        AzureQueue.Delete(bMessage);
    });
    With this change, profiling results now show that up to 90% of the time is taken by the Azure message delete calls alone. As it is good to delete a message as soon as processing is done, I remove it just after "DoSomething" finishes.
    What I need now are suggestions on how to further improve the performance of this function when 90% of the time is being eaten up by the Azure queue delete call itself. Is there a better, faster way to perform delete/bulk delete etc.?
    With the implementation mentioned here, I get close to 25 messages/sec. Right now the Azure queue delete calls are choking application performance, so is there any hope of pushing it further?
    Does it also make a difference to performance which queue delete overload I call? As of now the queue has overloaded methods for deleting a message: one which accepts a message object and another which accepts a message identifier and pop receipt. I am using the latter here, with message identifier and pop receipt, to delete the message from the queue.
    Let me know if you need any additional information or any clarification on the question.
    Inputs/suggestions are welcome.
    Many thanks.

    The first thing that came to mind was to issue a parallel delete at the same time you run the work in DoSomething, and add the message back into the queue if DoSomething fails. This won't work for every application, and work that was near the head of the queue could be pushed back to the tail, so you'd have to think about how that may affect your workload.
    Or, queue the delete on the thread pool after the work succeeds. Fire and forget. However, if you're processing at 25/sec and 90% of the time sits in the delete, you'd quickly accumulate delete calls on the thread pool and never catch up. At a 70-80% duty cycle this may work, but the closer you get to always being busy, the more dangerous this becomes.
    I wonder if calling the delete REST API yourself may offer any improvement. If you find that the delete sets up a TCP connection each time, this may be all you need. Try to keep the connection open, or see if the REST API can delete more at a time than the SDK API can.
    Or, if you have the funds, just add more VM instances doing the work in parallel, so the first machine handles 25/sec, the second another 25/sec, and you just live with the slow delete. If that's still not good enough, add more instances.
    Darin R.
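    A minimal C# sketch of the fire-and-forget delete suggested above, assuming the classic Microsoft.WindowsAzure.Storage SDK (CloudQueue) that the poster appears to be using; DoSomething and Log stand in for the poster's own code:

    // using System; using System.Threading.Tasks;
    // using Microsoft.WindowsAzure.Storage.Queue;
    var batch = queue.GetMessages(32, TimeSpan.FromMinutes(5));
    Parallel.ForEach(batch, msg =>
    {
        try
        {
            DoSomething(msg);
            // Hand the slow delete off to the thread pool so the worker
            // can move straight on to the next message.
            Task.Run(() => queue.DeleteMessage(msg.Id, msg.PopReceipt));
        }
        catch (Exception ex)
        {
            // On failure, do not delete: the message reappears on the
            // queue for a retry once its visibility timeout lapses.
            Log(ex);
        }
    });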

  • Upgraded both computers in the household and found Lion is too disruptive to workflow. Do I turn in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return the computer and get upgraded memory to improve performance of the existing MacBook Pro?

    Upgraded both computers in the household and found Lion is too disruptive to workflow.
    Do I turn in the new laptop for a pre-Lion rebuild to keep Snow Leopard, or do I return the new computer and get upgraded memory to improve performance of the existing MacBook Pro? I'm mostly still happy with the existing MacBook Pro, but Aperture doesn't work; the computer can't handle it.
    Another possibility is setting up a virtual machine with Snow Leopard Server software on the new computer.
    Any opinions on what would allow moving forward with the least hassle and best workflow continuity?

    Hi,
    What year and specs does the MBP have?

  • Difference between paper layout and web layout in terms of performance

    Hi All,
    1. Can anyone tell me the difference between paper layout and web layout in terms of performance?
    2. Can I save my RDF as a JSP? Will there be any performance difference at runtime?
    Regards
    Srinivas

    Hi Rainer,
    Thanks for your reply.
    1. You said paper layout is not a good choice for HTML output. Can you give some more information on this? Also, if you have any documents or links supporting this, please share them with me.
    2. I designed my reports using the web layout. My requirement now is to print a page header on every page of the report; I am unable to do this, as I didn't find an HTML tag for it. Kindly suggest a solution.
    Regards
    Srinivas
