Best practice - a question on how best to do something...

Hi,
The problem: to be able to geotag my location, adding a pin and some text to mark a particular point of interest at that location, and then be able to navigate back to it in future. Ovi Maps in the browser (where you can add a POI) was my first choice, but it is not available, and I cannot seem to do this in Ovi Maps for the N900.
(Of course the ideal for me would be to have a desktop widget which I could press and it would mark my GPS location on a map with space for a comment, but I know this isn't a wish list :-)
Could I please ask people their approach to this problem, what software they use and how?
Many thanks
Tom

Would be a nice feature. I'm not sure exactly what's coming in future updates or when MeeGo hits the servers, but someone over at the Maemo.org forums may be able to suggest a couple of apps that are available or being developed.

Similar Messages

  • Real time logging: best practices and questions ?

    I have 4 pairs of DS 5.2p6 servers in MMR mode on Windows 2003.
    Each server is configured with the default setting of "nsslapd-accesslog-logbuffering" enabled, and the log files are stored on a local file system, then later centrally archived thanks to a log sender daemon.
    I now have a requirement from a monitoring tool (used to establish correlations/links/events between applications) to provide the directory server access logs in real time.
    At first glance, each directory generates about 1.1 MB of access log per second.
    1)
    I'd like to know if there are known best practices / experiences for such a case.
    2)
    Also, should I upgrade my DS servers to benefit from any log-management-related features? Should I think about using an external disk subsystem (SAN, NAS, ...)?
    3)
    In DS 5.2, what's the default access log buffering policy: is there a maximum buffer size and/or time limit before flushing to disk? Is it configurable?

    Usually log buffering should be enabled. I don't know of any customers who turn it off. Even if you do, I guess it should be after careful evaluation in your environment. AFAIK, there is no configurable limit for buffer size or time limit before it is committed to disk.
    Regarding faster disks, I had the bright idea that you could create a ramdisk and set the logs to go there instead of disk. Let's say the ramdisk is 2 GB max in size and you receive about 1 MB/sec in writes. Say max-log-size is 30 MB. You can schedule a job to run every minute that copies the newly rotated file(s) from ramdisk to your filesystem and then sends them over to logs HQ. If the server does crash, you'll lose up to a minute of logs. Of course, the data disappears after reboot, so you'll need to manage that as well. Sounds like fun to try but may not be practical.
    Ramdisk on windows
    [http://msdn.microsoft.com/en-us/library/dd163312.aspx]
    Ramdisk on solaris
    [http://wikis.sun.com/display/BigAdmin/Talking+about+RAM+disks+in+the+Solaris+OS]
    [http://docs.sun.com/app/docs/doc/816-5166/ramdiskadm-1m?a=view]
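The every-minute copy job suggested above could be sketched roughly like this (a Python sketch; the paths and the "access.<suffix>" rotated-file naming are assumptions to adapt to your actual rotation policy):

```python
import shutil
from pathlib import Path

def copy_rotated_logs(ramdisk: Path, archive: Path) -> list:
    """Copy rotated access logs off the ramdisk, then delete them there.

    The live log is named plain 'access'; rotated files carry a suffix
    (e.g. access.20090723-130400), so they are safe to move while the
    directory server keeps writing.
    """
    archive.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(ramdisk.glob("access.*")):
        dest = archive / f.name
        if not dest.exists():
            shutil.copy2(f, dest)   # copy first ...
            f.unlink()              # ... then free the ramdisk space
            copied.append(f.name)
    return copied
```

Scheduled every minute (cron or Task Scheduler), this bounds ramdisk usage while the log sender daemon picks files up from the archive directory.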
    I should ask, how realtime should this log correlation be?
    Edited by: etst123 on Jul 23, 2009 1:04 PM

  • Best Practice type question

    Our environment currently has 1 connection factory defined per JMS module. We also have multiple queues per JMS module.
    In other J2EE app server environments I have worked in, we defined 1 connection factory per queue.
    Can someone explain if there is a best practice, or at least a good reason, for doing one over the other?
    The environment here is new enough that we can change how things are set up if it makes sense to do so.

    My two cents: A CF allows configuration of client load-balancing behavior, flow-control behavior, default QOS, etc. I think it's good to have one or more CFs configured per module, as presumably all destinations in the module are related in some way, and they therefore likely all require basically the same client behavior. If you have very few destinations, then it might help to have one CF per destination, but this places a bit more burden on the administrator to configure the extra CFs in the first place, and on everyone to remember which CF is best for communicating with which destination.
    Tom

  • Best Practices needed -- question regarding global support success stories

    My customer has a series of Go Lives scheduled throughout the year and is now concerned about an October EAI (Europe, Asia, International) go live.  They wish to discuss the benefits of separating a European go Live from an Asia/International go live in terms of support capabilities and best practices.  The European business is definitely larger and more important than the Asia/International business and the split would allow more targeted focus on Europe.  My customer does not have a large number of resources to spare and is starting to think that supporting the combined go live may be too much (i.e., too much risk to the businesses) to handle.
    The question for SAP is regarding success stories and best practices.
    From a global perspective, do we recommend this split? Do most of our global customers split a go live in Europe from a go live in Asia/International (which is Australia, etc.)? Can I reference any of these customers? If the EAI go live is not split, what is absolutely necessary for success? For example, if a core team member plus local support is required in each location, then this may not be possible with the resources they have...
    I would appreciate any insights/best practices/success stories/or "war" stories you might be aware of.
    Thank you in advance and best regards,
    Barbara

    Hi, this is purely based on customer requirements.
    I have a friend in an organization which went live in 38 centers at the same time.
    With the latest technologies in networking, distance does not make any difference.
    The organization where I currently work has global business locations. In my current organization the go live was in phases. They went live first in the region where the business was largest, because this region was their most important as far as revenue was concerned. Then, after stabilizing this region, a group of consultants went to the rest of the regions for the go live there.
    Both the companies referred to above are successfully on SAP and are leading partners with SAP. Unfortunately I am not authorized to give you the names of the organizations as a reference, as you requested.
    But in your case, if you have a shortage of manpower, you can do it in phases by first going live in the European market and then going live in the other regions.
    Warm Regards

  • Eclipse / Workshop dev/production best practice environment question.

    I'm trying to setup an ODSI development and production environment. After a bit of trial and error and support from the group here (ok, Mike, thanks again) I've been able to connect to Web Service and Relational database sources and such. My Windows 2003 server has 2 GB of RAM. With Admin domain, Managed Server, and Eclipse running I'm in the 2.4GB range. I'd love to move the Eclipse bit off of the server, develop dataspaces there, and publish them to the remote server. When I add the Remote Server in Eclipse and try to add a new data service I get "Dataspace projects cannot be deployed to a remote domain" error message.
    So, is the best practice to run everything locally (admin server, Eclipse/Workshop), get everything working, and then configure the same JDBC (or whatever) connections on the production server and deploy the locally created dataspace to the production box using the Eclipse that's installed on the server? I've read some posts/articles about a scripting capability that can perhaps do the configuration and deployment, but I'm really in baby-steps mode and probably need the UI for now.
    Thanks in advance for the advice.

    you'll want 4GB.
    - mike

  • Best Practices CS6 - Multicam Edit: How to use a key?

    I'm trying to figure out what's the best way to use the Multi-cam editing function of CS6 PP, while applying a key to the primary video track.
    The workflow I have is V1 contains the background that I want to key into V2. I applied UltraKey to V2, did the key (works great), and now I want to use the multicam editing to cut between the finished keyed V2 track, and video on V3. The way I've always used the Multicam editing in the past is to select the video clips from the clip browser (not the timeline) to create a multicam source sequence - but if I have to apply a key first, I can't do that. I thought of nesting V1 and V2 together after the key is applied, but then I can't make adjustments to the key after the final composite is made (or can I?)
    Any thoughts about what works best in this scenario? appreciate the help.

    I don't think my system could support live keying. I'd prefer to apply the key first but this is where I had a problem. Let's assume for a minute that there's no key. In a typical MCS workflow, I'd just select the clips from the browser, create the MCS, do the live cuts and be done with it. If I can still do that it's fine, but would that screw everything up in the MCS if I move all the video up one track because the background for the key has to be in front of the video that it's being applied to? I hope I'm making sense here.

  • Database best practice theoretical question

    Hi Forums
    I wasn't sure which forum to post this under - it's a general, non-specific database question.
    I have a client-server database app, which is working well with a single network client currently. I need to implement some kind of database record locking scheme to avoid concurrency problems. I've worked on these kinds of apps before, but never one this complicated. Obviously I've scoured the web looking into this, but found not much good reference material.
    I was thinking of making some kind of class with several lists - one for each table that requires locking - holding record locks while transactions are occurring. Clients would then request locks on records and release the locks when they are finished.
    I'm wondering: if a client requests a lock, then crashes without releasing it, should there be a timeout to enable the server to release the record? Should the client be polling the server to show it's still working on the record? This sounds a mess to me!
    If anyone can direct me to some good reference material on this I would appreciate it.
    br
    FM
    EDIT: OK, now I feel silly! There's a great article on [Wiki |http://en.wikipedia.org/wiki/Concurrency_control] showing some often-used methods.
    Edited by: fenderman on Mar 6, 2010 3:13 AM
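The timeout question raised above is often answered with a lease: the client must keep renewing its lock while editing, and a crashed client's lock simply expires with no explicit cleanup job. A minimal single-threaded Python sketch (all names hypothetical; a real server would need synchronization and persistence):

```python
import time

class LeaseLockManager:
    """Sketch of server-side record locking with lease expiry.

    A client renews its lease while editing; if it crashes, it stops
    renewing and the lease expires, freeing the record automatically.
    """

    def __init__(self, lease_seconds=30.0):
        self.lease_seconds = lease_seconds
        self._locks = {}   # (table, record_id) -> (client_id, expiry)

    def acquire(self, table, record_id, client_id, now=None):
        now = time.monotonic() if now is None else now
        key = (table, record_id)
        holder = self._locks.get(key)
        if holder is not None and holder[1] > now and holder[0] != client_id:
            return False                       # held by another live client
        self._locks[key] = (client_id, now + self.lease_seconds)
        return True

    # Renewing is just re-acquiring your own lock before it expires.
    renew = acquire

    def release(self, table, record_id, client_id):
        key = (table, record_id)
        if key in self._locks and self._locks[key][0] == client_id:
            del self._locks[key]
```

The client's periodic "I'm still here" poll maps onto `renew`; the lease length trades off how quickly a crashed client's records become available against how often live clients must check in.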

    kev374 wrote:
    thanks for the response, yes there are many columns that violate 3NF and that is why the column count is so high.
    Regarding the partition question, by better I meant that by using "interval" the partitions could be created automatically at the specified interval instead of having to create them manually.
    The key is to understand the logic behind these tables and columns - why it was designed like this. If it's a business requirement, then 200-some columns are not bad; if it's a design flaw, 20 columns could be too much. It's not necessarily always good to have a strict 3NF design; sometimes, for various reasons, you can denormalize the tables to get better performance.
    As to the partitioning question, so far you have to do the rolling-window (drop/add partition as time goes by) type of partitioning scheme manually.

  • IP over Infiniband network configuration best practices

    Hi EEC Team,
    A question I've been asked a few times, do we have any best practices or ideas on how best to implement the IPoIB network?
    Should it be Class B or C?
    Also, what are your thoughts in regards to the netmask, if we use /24 it doesn't give us the ability to visually separate two different racks (ie Exalogic / Exadata), whereas netmask /23, we can do something like:
    Exalogic : 192.168.*10*.0
    Exadata : 192.168.*11*.0
    While still being on the same subnet.
    Your thoughts?
    Gavin
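The /23-versus-/24 point above is easy to verify with Python's standard ipaddress module (the addresses are just the examples from the question):

```python
import ipaddress

# One /23 "fabric" covering both example rack ranges from the question.
fabric = ipaddress.ip_network("192.168.10.0/23")

exalogic_host = ipaddress.ip_address("192.168.10.25")
exadata_host = ipaddress.ip_address("192.168.11.25")

# Both hosts sit in the same /23 subnet ...
assert exalogic_host in fabric
assert exadata_host in fabric

# ... whereas a /24 would split them into two subnets.
assert exadata_host not in ipaddress.ip_network("192.168.10.0/24")

print(fabric.netmask, fabric.num_addresses)
```

A /23 (netmask 255.255.254.0) yields 512 addresses, double a /24, while keeping the third octet free to visually distinguish the racks.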

    I think it depends on a couple of factors, such as the following:
    a) How many racks will be connected together on the same IPoIB fabric
    b) What rack configuration do you have today, and do you foresee any expansion in the future - it is possible that you will move from a purely physical environment to a virtual environment, and you should consider the number of virtual hosts and their IP requirements when choosing a subnet mask.
    Class C (/24) with 256 IP values is a good start. However, you may want to choose a mask of length 23 or even 22 to ensure that you have enough IPs for running the required number of WLS, OHS, Coherence Server instances on two or more compute nodes assigned to a department for running its application.
    In general, when setting a net mask, it is always important that you consider such growth projections and possibilities.
    By the way, in my view, Exalogic and Exadata need not be in the same IP subnet, especially if you want to separate application traffic from database traffic. Of course, they can be separated by VLANs too.
    Hope this helps.
    Thanks
    Guru

  • NAC best practices

    Hello --
    Is there a best practice doc that explains how best to deploy NAC? Also, is it possible to deploy the NAC agent remotely?
    Please let me know.
    Thanks,
    Ohamien

    From my experience there are so many configurable parts and ways to approach NAC that it truly will depend on your situation. The best practice I can suggest is to read as many sources as possible. You may also want to check out the latest book on the NAC appliance from the Cisco Press:
    Cisco NAC Appliance, Enforcing Host Security with Clean Access by Jamey Heary.
    You should check out these links, too:
    http://cisconac.blogspot.com/
    http://www.networkworld.com/community/heary
    http://blog.tenablesecurity.com/
    Hope this helps.

  • How do you install Best Practices Install Assistant (BPIA)?

    Hi
    I've been looking into setting up Yard Management in our SAP system and have found documentation / software for ECC5 - for this version it looks like I would need to install the Best Practices Wholesale Distribution package with the necessary BC sets / eCATTs from W70 Yard Management, and can use the Best Practices Install Assistant (BPIA), which I think is installed via SAINT?
    HOWEVER - I've just realised that the version of SAP to be installed on is only 6.20/4.7. I think Yard Management is still available for this release, but it doesn't look like Best Practices / BPIA is used - how is this installed, and where can I find the documentation / software?
    Cheers
    Ross
    Edited by: Ross Armstrong on Jun 18, 2008 9:00 AM

    There are two options for downloading games or any application on a BlackBerry device.
    One is using the OTA (Over the Air) link, provided by the application owner, from where you can directly install games/applications.
    Another is using Desktop Manager's Application Loader. In that case, save the game/application files in a folder on your PC/laptop, then use Application Loader to find the location of the folder and click Open.
    tanzim

  • Best Practice: Where to put Listeners?

    In a Swing application the typical way of handling events is to add a listener. So far, so good.
    But where to locate the listeners? Obviously there are a lot of possible solutions:
    - Scatter them all over your code, like using anonymous listeners.
    - Implement all of them in a single, explicit class.
    - Only use windows as listeners.
    - etc.
    The intention of my question is not to get a rather long list of more ideas, or the pros and cons of any of the above suggestions. My actual question is: Is there a best practice for where to locate a listener's implementation? I mean, after decades of Swing and thousands of Swing-based applications, I am sure that there must be a best practice for where to put listener implementations.

    mkarg wrote:
    In a Swing application the typical way of handling events is to add a listener. So far, so good.
    But where to locate the listeners? Obviously there are a lot of possible solutions:
    - Scatter them all over your code, like using anonymous listeners.
    - Implement all of them in a single, explicit class.
    - Only uses windows as listeners.
    - etc.
    The intention of my question is not to get a rather long list of more ideas, or to get pros or cons of any of the above suggestions. My actual question is: Is there a best practice for where to locate a listener's implementation? I mean, after decades of Swing and thousands of Swing based applications, I am sure that there must be a best practice where to put listener implementations.
    You've asked other similar questions about best practices. No matter how long Swing has been around, people still program in a variety of ways, and there are lots of areas where there are several equally correct ways of doing things. Each way has its pros and cons, and the specific situation drives towards one way or the other. One's best practice of using anonymous listeners will be another's code smell. One's best practice of using inner classes will be another's hassle.
    So you will probably only get opinions, and likely not universally recognized best practices.
    That being said, here is my opinion (nothing more than that, but it has a high value to me :o) :
    In your list of options, the one most likely to form a consensus against it is "only use windows as listeners". I assume you mean each frame is implemented as a MyCustomFrame extends JFrame , and adds this as a listener on all contained widgets.
    This option is disregarded because
    1) extending JFrame is generally not a meaningful use of inheritance (that point is open to debate, as it is quite handy)
    2) registering the same object as a listener for several widgets makes the implementation of the listener callbacks awkward (lots of if-then-else). See [that thread|http://forums.sun.com/thread.jspa?forumID=57&threadID=5395604] for more arguments.
    Now, no matter what style of listeners you choose, your listeners shouldn't do too much work (how much is too much is also open to debate...):
    if a listener gets complicated, you should simplify it by making it a simple relay that transforms low-level graphical events into functional events to be processed by a higher-level class (+Controller+). I find the Mediator pattern to be a best practice for "more-than-3-widgets" interactions. As the interactions usually also involve calls to the application model, the mediator becomes a controller.
    With that in mind, short anonymous listeners are fine: the heavy work will be performed by the mediator or controller, and the latter is where maintenance will occur. So "scatter them all over your code" (sounds quite pejorative) is not much of an issue: you have to hook the widget to the behavior somewhere anyway; the shorter, the better.
    For simpler behavior, see the previous reply which gives perfect advice.
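For illustration only, the relay-plus-mediator idea can be sketched in a language-agnostic way (Python stand-ins here for brevity; all names are hypothetical, and in Swing the Button below would be a JButton and the lambda an anonymous ActionListener):

```python
class Button:
    """Stand-in for a GUI widget whose listeners are plain callables."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, fn):
        self._listeners.append(fn)

    def click(self):            # simulates the user event
        for fn in self._listeners:
            fn()

class SaveMediator:
    """Higher-level class: turns low-level widget events into
    functional events and talks to the application model."""
    def __init__(self, save_button, model):
        self.model = model
        # Short anonymous relay: no logic here, just forwarding.
        save_button.add_listener(lambda: self.on_save_requested())

    def on_save_requested(self):
        self.model.append("saved")   # the actual work lives here

model = []
button = Button()
mediator = SaveMediator(button, model)
button.click()
print(model)
```

The anonymous listener stays a one-liner; the mediator is the single place you maintain when the interaction logic grows.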

  • Best Practice - OSS IDs

    We are trying to implement a good process for requesting, checking out, and assigning OSS IDs. Can you please let us know what you do, or perhaps the best practice for this?
    Do you have SAP_ALL assigned to OSS user IDs?
    Do you require approval from a manager prior to use?
    Do your IDs expire via the SU01 valid-to date?
    How do you designate that an OSS ID is for a particular message or ticket?
    Do people request these before they log tickets, or after, when SAP wants this information?
    Are you using Fire Fighter to track what they do?
    Are you reviewing their activity? If so, how often?
    We want to make the SOX auditors happy with our process. Any information will be helpful.
    Thanks,
    Carly

    Hi Carly,
    Please see my responses below -
    Do you have SAP_ALL assigned to OSS user IDs? - SAP_ALL should be given to only a select few (depending on role) and not to all.
    Do you require approval from a manager prior to use? - Yes, an approval mechanism should be in place.
    Do your IDs expire via the SU01 valid-to date? - Yes. This is one of the best practices to follow.
    How do you designate that an OSS ID is for a particular message or ticket? - An OSS ID is for an SAP user. It is used for posting messages to SAP and downloading OSS Notes/documents from SCN. For tickets, you have your regular SAP user IDs. The two need to be different. (Not sure if I understood your question fully.)
    Do people request these before they log tickets, or after, when SAP wants this information? - We can create OSS IDs for teams immediately after SAP is installed.
    Are you using Firefighter to track what they do? - No.
    Are you reviewing their activity? If so, how often? - Yes, fortnightly reviews.
    Hope this helps.
    Thanks,
    Prashant

  • RAID Level Configuration Best Practices

    Hi Guys ,
    We are building a new virtual environment for SQL Server and have to define the RAID level configuration for the SQL Server setup.
    Please share your thoughts on the RAID configuration for SQL data, log, tempdb and backup files:
    Files --> RAID Level
    SQL Data Files -->
    SQL Log Files -->
    Tempdb Data -->
    Tempdb Log -->
    Backup Files -->
    Any other configuration best practices are more than welcome, like memory settings at the OS level and LUN settings.
    Best practices to configure SQL Server in Hyper-V with clustering.
    Thank you

    Hi,
    If you can spend some bucks, you should go for RAID 10 for all files. Also, as a best practice, keeping database log and data files on different physical drives gives optimum performance. Tempdb can be placed with the data files or on a different drive depending on usage; it's always good to use a dedicated drive for tempdb.
    For memory settings, please refer to this link for setting max server memory.
    You should monitor SQL Server memory usage using the counters below, taken from this link:
    SQLServer:Buffer Manager--Buffer Cache Hit Ratio (BCHR): If your BCHR is high (90 to 100), it points to the fact that you don't have memory pressure. Keep in mind that if somebody runs a query which requests a large number of pages, BCHR might momentarily come down to 60 or 70, maybe less, but that does not mean there is memory pressure; it means the query requires a lot of memory and will take it. After that query completes you will see BCHR rising again.
    SQLServer:Buffer Manager--Page Life Expectancy (PLE): PLE shows how long a page remains in the buffer pool. The longer it stays, the better. It's a common misconception to take 300 as a baseline for PLE, but it is not. I read in Jonathan Kehayias's book (Troubleshooting SQL Server) that this value was a baseline when SQL Server 2000 was current and the most RAM one could see was 4-6 GB. Now, with 200 GB of RAM in the picture, this value is not correct. He also gave a (tentative) formula for how to calculate it: take the base value of 300 presented by most resources, then determine a multiple of it based on the configured buffer cache size (the 'max server memory' sp_configure option in SQL Server) divided by 4 GB. So, for a server with 32 GB allocated to the buffer pool, the PLE value should be at least (32/4)*300 = 2400. So far this has worked well for me, so I would recommend you use it.
    SQLServer:Buffer Manager--Checkpoint Pages/sec: The checkpoint pages/sec counter is important for spotting memory pressure, because if the buffer cache is small, lots of new pages need to be brought in and flushed out of the buffer pool; under load the checkpoint's work increases and it starts flushing out dirty pages very frequently. If this counter is high, your SQL Server buffer pool is not able to cope with the incoming requests, and you need to increase it by increasing buffer pool memory, or by increasing physical RAM and then making adequate changes to the buffer pool size. Technically this value should be low; if you are looking at a line graph in perfmon, it should stay near the baseline for a stable system.
    SQLServer:Buffer Manager--Free Pages: This value should not be low; you always want to see a high value for it.
    SQLServer:Memory Manager--Memory Grants Pending: If you see memory grants pending, your server is facing a SQL Server memory crunch, and increasing memory would be a good idea. For memory grants please read this article:
    http://blogs.msdn.com/b/sqlqueryprocessing/archive/2010/02/16/understanding-sql-server-memory-grant.aspx
    SQLServer:Memory Manager--Target Server Memory: This is the amount of memory SQL Server is trying to acquire.
    SQLServer:Memory Manager--Total Server Memory: This is the amount of memory SQL Server has currently acquired.
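The tentative PLE baseline quoted above reduces to a one-line formula, for instance:

```python
def ple_baseline(max_server_memory_gb, base=300):
    """Tentative PLE baseline: (max server memory / 4 GB) * 300."""
    return (max_server_memory_gb / 4) * base

# The worked example from the text: a 32 GB buffer pool -> baseline of 2400.
print(ple_baseline(32))
```

So a sustained PLE well below this scaled value, rather than below a flat 300, is the signal worth investigating.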
    For other settings I would like you to discuss with vendor. Storage questions IMO should be directed to Vendor.
    Below would surely be a good read
    SAN storage best practice For SQL Server
    SQLCAT best practice for SQL Server storage

  • Best Practice for Significant Amounts of Data

    This is basically a best-practice/concept question and it spans both Xcelsius & Excel functions:
    I am working on a dashboard for the US Military to report on some basic financial transactions that happen on bases around the globe.  These transactions fall into four categories, so my aggregation is as follows:
    Year,Month,Country,Base,Category (data is Transaction Count and Total Amount)
    This is a rather high level of aggregation, and it takes about 20 million transactions and aggregates them into about 6000 rows of data for a two year period.
    I would like to allow the users to select a Category and a country and see a chart which summarizes transactions for that country ( X-axis for Month, Y-axis Transaction Count or Amount ).  I would like each series on this chart to represent a Base.
    My problem is that 6000 rows still appears to be too many for an Xcelsius dashboard to handle. I have followed the concatenated-key approach and used SUMIF to populate a matrix with the data for use in the chart. This matrix has Bases for row headings (only those within the selected country) and Months for column headings. The data is COUNT. (I also need the same matrix with dollar amounts as the data.)
    In Excel this matrix works fine and seems to be very fast. The problem is with Xcelsius. I have imported the spreadsheet but have NOT even created the chart yet, and Xcelsius is CHOKING (and crashing). I changed Max Rows to 7000 to accommodate the data. I placed a simple combo box and a grid on the canvas - BUT NO CHART yet - and the dashboard takes forever to generate and is REALLY slow to react to a simple change in the combo box.
    So, I guess this brings up a few questions:
    1) Am I doing something wrong, and did I miss something that would prevent this problem?
    2) If this is standard Xcelsius behavior, what are the best practices to solve the problem?
    a. Do I have to create 50 different data ranges in order to improve performance (i.e., each country-category gets a separate range)?
    b. Would it even work with that many data ranges in it?
    c. Do you aggregate it as a crosstab (months as column headings) and insert that crosstabbed data into Excel?
    d. Other ideas that I'm missing?
    FYI:  These dashboards will be exported to PDF and distributed.  They will not be connected to a server or data source.
    Any thoughts or guidance would be appreciated.
    Thanks,
    David
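As a cross-check of the concatenated-key/SUMIF approach outside Excel, here is a rough Python sketch of the same Base-by-Month pivot for one selected country and category (all field names and sample rows are hypothetical):

```python
def build_matrix(rows, country, category, value="count"):
    """rows: iterable of dicts with keys
    year, month, country, base, category, count, amount.
    Returns base -> {(year, month): summed value}."""
    matrix = {}
    for r in rows:
        if r["country"] != country or r["category"] != category:
            continue            # the combo-box filters
        cell = matrix.setdefault(r["base"], {})
        key = (r["year"], r["month"])
        cell[key] = cell.get(key, 0) + r[value]
    return matrix

rows = [
    {"year": 2010, "month": 1, "country": "DE", "base": "Ramstein",
     "category": "A", "count": 5, "amount": 120.0},
    {"year": 2010, "month": 1, "country": "DE", "base": "Ramstein",
     "category": "A", "count": 3, "amount": 80.0},
    {"year": 2010, "month": 2, "country": "DE", "base": "Stuttgart",
     "category": "A", "count": 7, "amount": 200.0},
]
print(build_matrix(rows, "DE", "A"))
```

Whatever the tool, the pivot itself is cheap; the Xcelsius slowness comes from re-evaluating thousands of SUMIF cells per interaction, which is why pre-aggregating per country/category helps.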

    Hi David,
    I would leave your query
    "Am I doing something wrong and did I miss something that would prevent this problem?"
    to the experts/ gurus out here on this forum.
    From my end, you can follow
    TOP 10 EXCEL TIPS FOR SUCCESS
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/204c3259-edb2-2b10-4a84-a754c9e1aea8
    Please follow the Xcelsius Best Practices at
    https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/a084a11c-6564-2b10-79ac-cc1eb3f017ac
    In order to reduce the size of xlf and swf files follow
    http://myxcelsius.com/2009/03/18/reduce-the-size-of-your-xlf-and-swf-files/
    Hope this helps to certain extent.
    Regards
    Nikhil

  • SAP CRM V1.2007 Best Practice

    Hello,
    we are preparing the installation of a new CRM 2007 system and we want to have a good demo system.
    We are considering two options:
    . SAP CRM IDES
    . SAP CRM Best Practice
    knowing that we have an ERP 6.0 IDES system we want to connect to.
    The Best Practice package seems to have a lot of preconfigured scenarios that will not be available in the IDES system (known as "SAP All-in-One").
    How can we start the automatic installation of the scenarios (with solution builder) connecting to the ERP IDES system?
    Reading the BP Quick guide, it is mentioned that in order to have the full BP installation we need to have a ERP system with another Best Practice package.
    Will the pre-customized IDES data in ERP be recognized in CRM?
    In other words, is the IDES master data, transactional data and organizational structure the same as the Best Practice package one?
    Thanks a lot in advance for your help
    Benoit

    Thanks a lot for your answer, Padma Guda.
    The difficult bit in this evaluation is that we don't know exactly the difference between the IDES and the Best Practice package. That is to say, what is the advantage of a CRM Best Practice system connected to an ERP IDES, as opposed to a CRM IDES system connected to an ERP IDES system?
    As I mentioned, we already have an ERP IDES installed as the back-end system.
    I believe that if we decide to use the ERP IDES as the ERP back end, we will lose some of the advantages of having an ERP Best Practice system connected to a CRM Best Practice system, e.g. sales areas already mapped and known by the CRM system, ERP master data already available in CRM, transactional data already mapped, pricing data already mapped, etc.
    Is that right? Or do we have to do an initial load of ERP in all cases?
