Best practices for sharing 4 printers on a small network running Server 2008 R2 Standard (Service Pack 1)

Hello, 
I'm a new IT admin at a small company (10-12 PCs running Windows 7 or 8) which has 4 printers. I'd like to install the printers either connected to the server or as wireless printers (one is old enough to require a USB connection to a PC, with no network capability), such that every PC has access to each printer.
Don't worry about the USB printer - I know it's not the best way to share a printer, but it's not a critical printer; I just want it available when its host PC is on.
I've read a lot about the best way to set up printers, including material on Group Policy and the Print Server role, but I am not a network administrator and I don't really understand much of it. I'd just like to install the drivers on the server, or something like that, and then share the printers. Right now each printer is set up a little differently: one is on a WSD port, two have a little "shared" icon, and one has that icon plus a "network" icon... it's very confusing.
Can anyone help me with a basic setup that I can do for each printer?
P.S. They all have reserved IP addresses.
Thanks,
Laura

You may need to set up the Print Server role... these links may be helpful:
http://www.techiwarehouse.com/engine/9aa10a93/How-to-Share-Printer-in-Windows-Server-2008-R2
http://blogs.technet.com/b/yongrhee/archive/2009/09/14/best-practices-on-deploying-a-microsoft-windows-server-2008-windows-server-2008-r2-print-server.aspx
http://joeit.wordpress.com/2011/06/08/how-do-i-share-a-printer-from-ws2008-r2-to-x86-clients-or-all-printers-should-die-in-a-fire/
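If you do end up scripting the print-server setup, here is a rough sketch of the idea in Python. It drives the stock Windows tools (prnport.vbs and printui.dll, both present on Server 2008 R2); the IP address, driver name, INF path, and share name are placeholders, not values from this thread.

    import subprocess

    # Placeholder values: substitute each printer's reserved IP, the driver's
    # exact name as it appears in its INF, the INF path, and the share name.
    IP = "192.168.1.50"
    DRIVER = "HP Universal Printing PCL 6"
    INF = r"C:\Drivers\printer.inf"
    NAME = "FrontOffice"

    SCRIPTS = r"C:\Windows\System32\Printing_Admin_Scripts\en-US"

    # 1) Create a standard TCP/IP port aimed at the printer's reserved address.
    subprocess.run(["cscript", SCRIPTS + r"\prnport.vbs",
                    "-a", "-r", f"IP_{IP}", "-h", IP, "-o", "raw", "-n", "9100"],
                   check=True)

    # 2) Install the printer on that port from the driver's INF.
    subprocess.run(["rundll32", "printui.dll,PrintUIEntry",
                    "/if", "/b", NAME, "/f", INF, "/r", f"IP_{IP}", "/m", DRIVER],
                   check=True)

    # 3) Share it so every PC can connect to \\SERVER\FrontOffice.
    subprocess.run(["rundll32", "printui.dll,PrintUIEntry",
                    "/Xs", "/n", NAME, "attributes", "+Shared", "sharename", NAME],
                   check=True)

The same three steps done by hand in the Print Management console (port, printer, share) cover each of the networked printers; the USB one just stays shared from its host PC.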
Best,
Howtodo

Similar Messages

  • Best practice for installing Oracle 11g R2 on Windows Server 2008 R2

    Dear all,
    May I know what the best practice is for installing Oracle 11g R2 on Windows Server 2008 R2? Should I create a special Windows account for the Oracle database installation? What permissions should I grant on the folders where Oracle is installed and where the database-related files (datafiles, controlfiles, etc.) are located?
    Should I just grant Full Control to Administrators and SYSTEM and remove permissions for all other accounts?
    Also, how should I configure Windows Firewall to allow clients to connect to the database?
    Thanks for your help.

    Hi Christian,
    Check this on MOS
    *RAC Assurance Support Team: RAC Starter Kit and Best Practices (Windows) [ID 811271.1]*
    https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=BULLETIN&id=811271.1
    DOC Modified: 14-DEC-2010
    Regards,
    Levi Pereira
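    For the Windows Firewall part of the original question, the usual approach is a single inbound rule for the listener port (1521 by default; check listener.ora for the actual port). A minimal sketch, to be run from an elevated prompt - the rule name is made up:

        import subprocess

        # Open the default Oracle listener port (1521/TCP) in Windows Firewall.
        subprocess.run([
            "netsh", "advfirewall", "firewall", "add", "rule",
            "name=Oracle Listener (TCP 1521)",
            "dir=in", "action=allow", "protocol=TCP", "localport=1521",
        ], check=True)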

  • Best practice to share user defined reports?

    What is the best practice to share user defined reports among team members?
    Currently I can create a folder under the Reports/User Defined Reports system folder, and any reports or subfolders under it can be exported to XML by right-clicking the folder and selecting 'Save As'. I then send the file around, put it on a share, or check it into SVN; the other team members right-click the User Defined Reports system folder, choose "Open report...", and select the XML file.
    Is there any more elegant way?
    Is it a good idea to share the C:\Users\lgal\AppData\Roaming\SQL Developer\UserReports.xml on Windows, or ~/.sqldeveloper/UserReports.xml (on Linux) among users?
    Is there a standard way to bring this under SVN control from within SQL Developer?

    I think the best thing would be if someone set up a config-sharing site!
    It would be very easy to do on a system like Drupal or e107. People could register and submit their configs to different download sections based on what sort of configs they are. My site runs e107, so it's not hard to see how easy it would be.
    It would be easy to set up - I could do it in about 30 minutes - but I think I have done more than enough of that! There seems no need to officialize it to any degree and burden the server even more.
    I guess user-cb could host it, couldn't it? If you restrict people to posting only small config files with no icons, and link to big stuff offsite, it should be tiny bandwidth usage.
    That's what I would do, anyway.
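    Until such a site exists, the export/import workflow from the question can at least be semi-automated. A minimal sketch, assuming the team keeps the exported reports XML in an SVN working copy and each member runs something like this after svn update (all paths are placeholders):

        import shutil
        from pathlib import Path

        # Placeholder locations: adjust for your checkout and platform
        # (on Windows the target lives under AppData\Roaming\SQL Developer).
        SVN_COPY = Path.home() / "svn" / "team-reports" / "UserReports.xml"
        SQLDEV = Path.home() / ".sqldeveloper" / "UserReports.xml"

        # Back up the local reports, then take the team version.
        if SQLDEV.exists():
            shutil.copy2(SQLDEV, SQLDEV.with_suffix(".xml.bak"))
        SQLDEV.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(SVN_COPY, SQLDEV)
        print(f"Installed team reports from {SVN_COPY}")

    Note the caveat implicit in the question: replacing UserReports.xml wholesale overwrites each user's own reports, so the per-folder export/import remains the safer route for mixed personal/team use.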

  • Best practices for multiple users on a network?

    We have a centralized server storing all of our media, and a Final Share system allowing about 10 clients to connect.  The details of the system aren't important, it mounts just like a local drive on each client and bandwidth is higher than FW800.
    My question is what is the best way to handle cache files, sidecar files, autosave, etc. when passing edits between editors?  What should be stored locally, what can be placed on the server?  Of course all media has to be placed on the server, but can everything else sit alongside the footage? 
    The reason I ask is we started by pointing everything at the server, but now are having the good old Serious Error whenever two editors (in different project files) are referencing the same media.  We'd love to resolve this... assistant editors can't log footage if doing so causes the editor's project file to lock up. 
    Thanks!
    P.S. I first tried calling Adobe phone support about this... not making that mistake again. The most apathetic customer service rep I think I've ever spoken with told me they wouldn't provide support because our Serious Error was the result of network issues and not their fault, and then that we should be storing our media files locally. I didn't see it necessary to mention that local storage on 10 machines isn't that viable with almost 50TB of data.

    klonaton wrote:
    The reason I ask is we started by pointing everything at the server, but now are having the good old Serious Error whenever two editors (in different project files) are referencing the same media.  We'd love to resolve this... assistant editors can't log footage if doing so causes the editor's project file to lock up.
    Hi Klonaton.
    Are you using two Premiere projects, or one Prelude project (for logging) and one Premiere project (for editing)? I'll have to check tomorrow to be absolutely sure, but we do sometimes have simultaneous logging and editing with - respectively - Prelude and Premiere over our small network, with no issues.
    Changes to metadata and comment markers created in Prelude instantly appear and update in Premiere. Subclips don't, and have to be sent from Prelude to Premiere on the same computer (or the media has to be imported again inside Premiere). I think that's normal considering how subclips work a bit differently in Prelude and Premiere.
    All files are on a Thunderbolt RAID drive shared on the network through an iMac. The Media Cache and Media Cache DB are on the shared network drive too, common to all users and computers.
    Also, I can't tell whether the crashes happen only when your two projects are running simultaneously or not (in the latter case, that's a huge problem).

  • Best practiceS for setting up Macs on Network

    Greetings.
    We have six Macs on our Windows Server network: three iMacs and three laptops. We have set up all the machines and they are joined to Active Directory. In the past, we have always created local users on the machines and then "browsed" to the server shares and mounted them. We've learned things have improved/changed over the years and we're just now realizing we can probably have the machines set up to work better. So, I have a couple of questions about "best practices" when setting up each of the machines.
    1. Since we're in a network environment, should we not set up "local logins/users" and instead have users log in with their AD login? It seems having a local account creates some conflicts with the server since upgrading to Lion.
    2. Should we set the computer to not ask for a “list of users” and instead ask for a username and password for logins?
    3. For the user that uses the machine most often, they can still customize their desktop when they use an AD login, correct?
    4. Should we set up Mobile User Accounts? What exactly does this do?
    Any other advice on how we should best be setting up the clients for our environment to make sure we are following best practices would be great!
    Thanks for any help!
    Jay


  • Best practice MPLS design/configuration for small service provider

    We are a small regional service provider and do not yet support MPLS on our network. To start supporting MPLS, I'd like to get opinions and recommendations on the best-practice configuration.
    Here is what we have today –
    We have our own BGP AS and multiple /24s.
    We are running OSPF on the Cores and BGP on the Edge routers peering with ISPs.
    We peer with multiple tier-1 ISPs for internet traffic. We do not provide public transit.
    What we want for phase one MPLS implementation –
    Configure basic MPLS /vpn functionality.
    No QoS optimization required for phase 1.
    We have Cisco ME 3600X for PE. Any recommendations will be appreciated.

    Not sure what kind of devices or routers you have in your network, but look for labeled multicast support for MVPN. That will avoid the complexity of running other control protocols (like PIM) in the core.
    PE redundancy can be obtained with BGP attributes; CE-PE connectivity can be tuned using an IGP or VRRP/HSRP.
    You can have multiple RSVP-TE tunnels for various contract traffic, and you can bind different kinds of traffic to different RSVP tunnels based on the contract or service with your customer.
    An RSVP-TE design with link/node protection will be of great help in achieving faster failover.

  • Best Practice after moving office and upgrading network

    Happy Friday all,
    I wanted to know what the best practice is after some pretty dramatic changes on the network. We have just moved office, replaced switches, changed Internet service and got rid of lots of stuff we didn't need any more.
    What is the best way forward to clean up the Spiceworks inventory, etc.?
    Is it possible to just delete everything and start again?
    Short of a brand new installation (which I am leaning towards) are there any alternatives?
    Thanks in advance,
    Craig
    This topic first appeared in the Spiceworks Community


  • Best practices for periodic reboot of MS Windows 2003 Server

    Hi there,
    We are a 4-month-old v9.3.1 Essbase environment running on an 8-CPU MS Windows 2003 Enterprise Edition server, and we run data loads twice daily as well as full outline restructures overnight. Data loads are executed both via SQL retrieves (i.e., from a view set up on another server) and via data file loads after clearing the outline and rebuilding it.
    Over time, I have noticed that the performance of the server was degrading markedly and that calculation scripts are taking longer. Funnily enough, the server's performance goes back to what I would expect it to be after a full reboot.
    My questions are as follows:
    1. Is it typically best practice to reboot MS Windows 2003 servers when dealing with a heavily accessed environment?
    2. If yes, is it mentioned anywhere in the Essbase manuals that MS Windows servers ought to be rebooted periodically in order to perform at their best?
    3. Does Microsoft recommend rebooting their servers on a periodic basis? I looked throughout their Knowledge Base but couldn't find any mention of it, despite the fact that a periodic reboot clearly boosts the performance of our MS Windows server.
    Thanks in advance for your responses/recommendations
    J

    A bit of non-essential additional info:
    We are a small-to-midrange school district who, after close to 20 years on Novell networks, have decided to design and create a new Microsoft network and migrate all of our data and services over to the new infrastructure. We are planning on rolling out 2012 R2 servers with as much Hyper-V virtualization as possible.
    During the last few weeks we have been able to find most of the information we need to undertake this project, and most of it was pretty solid with little ambiguity, except for the information regarding virtualizing the DCs, which has been a bit inconsistent.
    Yes, we have read all the documents that most of these posts tend to point to, but found that some, if not most, still refer to doing this under Server 2008 R2; we haven't really seen all that much on Server 2012 R2.
    We have read these and others:
    Introduction to Active Directory Domain Services (AD DS) Virtualization (Level 100), 
    Virtualized Domain Controller Technical Reference (Level 300),
    Virtualized Domain Controller Cloning Test Guidance for Application Vendors,
    Support for using Hyper-V Replica for virtualized domain controllers.
    Again, thanks for any information, best practices, cookie cutter or otherwise that you can share.
    Chas.

  • Best Practice for SUP and WSUS Installation on Same Server

    Hi Folks,
    I have a question. I am in the process of deploying SCCM 2012 R2, and I was deploying the Software Update Point with an existing WSUS server installed on a separate server from SCCM.
    A debate has started with one of my colleagues, who says that using a remote WSUS server is recommended by Microsoft for scalability and security: WSUS downloads the updates from Microsoft, and SCCM works as a downstream server to fetch updates from the WSUS server.
    But my understanding is that it is recommended to install WSUS on the same server where SCCM is installed... actually, that it is recommended to install WSUS on a site system, and that you can use the same SCCM server to deploy WSUS.
    Please advise me on the best practices for deploying SCCM and WSUS. What does Microsoft say: should WSUS be installed on the same server as SCCM, or on a separate server?
    Awaiting your advice ASAP :)
    Regards, Owais

    Hi Don,
    Thanks for the information. Another quick one: is the configuration I described above correct in terms of planning and best practices?
    I agree with Jorgen, it's ok to have WSUS/SUP on the same server as your site server, or you can have WSUS/SUP on a dedicated server if you wish.
    The "best practice" is whatever suits your environment, and is a supported-by-MS way of doing it.
    One thing to note: if WSUS ever becomes "corrupt" it can be difficult to repair, and sometimes it's simplest to rebuild the WSUS Windows OS. If this is on your site server, that's a big deal.
    Sometimes WSUS goes wrong (not because of ConfigMgr).
    Note that if you have a very large estate, or multiple primary site servers, you might have a CAS, and you would need a SUP on the CAS. (This is not a recommendation for a CAS, just something to be aware of.)
    Don
    (Please take a moment to "Vote as Helpful" and/or "Mark as Answer", where applicable.
    This helps the community, keeps the forums tidy, and recognises useful contributions. Thanks!)

  • Are there any  best practice templates available to load Customer Master data into ECC using data services?

    Hi,
    As far as I remember there are best practices templates (AIO). I am not able to find the location to download these atl files though.
    Thanks,
    Pramod

    Hi Pramod, did you refer to these documents:
    http://help.sap.com/businessobject/product_guides/sboDS41/en/sbo41_ds_sap_en.pdf
    http://events.asug.com/2011AC/4103_Legacy_Data_Migration_to_SAP_ECC.pdf

  • Best way to migrate SharePoint 2003 data into SQL Server 2008

    Hi Experts,
    I am planning to migrate data from SharePoint 2003 into SQL Server 2008. After the migration, the SharePoint site will be deleted in a couple of months, and that data will then be fed into a .NET front-end application.
    1) What is the best and easy way to do it?
    2) Is there any way to automate the migration process? i.e. If a new record gets entered into SharePoint it should be populated into SQL Server 2008.
    3) Any other suggestions
    Thanks,
    John

    Dear John,
    If it's just a few lists, and you just want to import them "as-is", then it should be possible to do so... and survive to tell about it ;-)
    Generally speaking, you will need to write a small process (program or script) to read/parse each list and check whether the item(s) are already in the target table (I'm assuming that there is a distinct target table for each list, and that each list has "something" you can use to distinguish each row); if an item is not there, just add it according to your needs.
    Then just rerun the process periodically, and it will keep your database up to date (you could even set it up to update records that have changed, but that would slow the process down significantly).
    What I just described is doable and not TOO complicated. It could be done in a lot of different ways and with different programming/scripting languages; you can certainly do it in any flavor of .NET language, and even in PowerShell.
    As I mentioned, this is speaking in general; the actual implementation will depend on your specific needs and the kind of data that you have/need to keep.
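    A minimal sketch of that process in Python with pyodbc. Pulling the rows out of SharePoint 2003 is left as a placeholder (fetch_list_items), since that side depends on whether you read the Lists.asmx SOAP service or a saved export; the table and column names are likewise assumptions:

        import pyodbc

        def fetch_list_items():
            """Placeholder: yield (item_id, title, body) tuples from the
            SharePoint 2003 list, e.g. via the Lists.asmx web service."""
            raise NotImplementedError

        # Assumed target: dbo.MigratedItems(ItemId INT PRIMARY KEY, Title, Body)
        conn = pyodbc.connect(
            "DRIVER={SQL Server};SERVER=dbhost;DATABASE=Archive;Trusted_Connection=yes"
        )
        cur = conn.cursor()

        for item_id, title, body in fetch_list_items():
            # ItemId is the "something you can use to distinguish each row".
            cur.execute("SELECT 1 FROM dbo.MigratedItems WHERE ItemId = ?", item_id)
            if cur.fetchone() is None:
                cur.execute(
                    "INSERT INTO dbo.MigratedItems (ItemId, Title, Body) VALUES (?, ?, ?)",
                    item_id, title, body,
                )

        conn.commit()
        conn.close()

    Rerun it on a schedule (Task Scheduler is enough) and it covers point 2, keeping SQL Server in step until the SharePoint site is retired.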
    Best Regards / Saludos, Marianok
    Disclaimer: This post, and all included code and information is provided "AS IS" with no warranties or guarantees and confers no rights. Try it at your own risk, I take no responsibilities.

  • Best way to share aperture library over network on two computers?

    I've read several posts about doing this, and most recommend using an external HD and just hot-swapping. However, is there not a better alternative? Can I not share my library over the network, or just remote-connect to the other computer?
    Thanks for the help!

    Hi Terence and Leonie,
    You can run several Aperture instances on the same computer as long as they all use distinct libraries.
    What Aperture does is create a simple lock file in <your Aperture library>.aplibrary/Database/apdb. The lock file contains the PID of the process, so no two Aperture instances can open the same library (although I don't know how well they have managed the concurrent creation, locking and verification of this lock; it's a typical and difficult software engineering problem).
    Anyway, I have not said that Aperture supports concurrent editing. Whether a file is a database or a simple text file, you have basically the same issues over a network or in a concurrent environment. So whether the folder structure holds simple flat text files, XML objects or databases, it is still a bunch of files (this is a UNIX system, by the way).
    Aperture contains more than one SQLite file; there are some in the root folder, some under Database/apdb, etc. There are also XML files that could serve as a database.
    Anyhow, editing a file (SQLite, XML, txt, doc, etc.) on a share or locally is much the same. The drive can fail too, although that is much less likely than the network failing.
    Editing files over the network has a few shortcomings, but nothing that cannot be worked around once it is understood.
    Locking files over the network can, however, be tricky, and depends largely on the network protocol used (NFS, SMB/CIFS or AFP, just to name a few). In addition, if you import files from /Users/foo/Photos into Aperture (without copying them within Aperture), they might not be visible from another computer, or only the previews will be, as the path /Users/foo/Photos is not "visible" from the other computer.
    What really is a problem when you put a file somewhere non-local is the file system type and its features. HFS+ supports extended attributes (metadata); many other file systems support these too, but the API (how software such as Aperture accesses the feature) might not be similar, and the amount of metadata or its structure might differ from one file system to another. This is where you might lose information.
    A tool like Dropbox is pretty good at maintaining metadata across different computers, and it has been adapted to do so for OS X specifically (and also for Linux and Windows).
    The second problem is that if you share the library from a Mac via SMB (the Windows network share protocol, aka CIFS on recent Windows versions), SMB might not support reading and writing HFS+ metadata, so data loss might occur.
    Apple is just warning us of all these shortcomings, and advises using local storage formatted as HFS+, because even a local external disk formatted as NTFS or another file system could be a problem.
    However, power users who understand the risks and limitations of a network share (in general) can create a disk image on the share (even an SMB one). As far as OS X is concerned, once the disk image is mounted it behaves just like any other local hard disk, only "slower", especially latency-wise. OS X fully supports disk images mounted on a share and takes care of synchronising them properly.
    Of course, due to the nature of the link between the application data and where they are stored, in case of an application or OS crash the amount of data lost could be greater than when the data are held locally.
    I hope this clarifies my post.
    Note: another shortcoming of having the library on the network is this: Aperture locks the library by creating a file containing a process ID. If the same library is opened from another computer on the network, that other Aperture instance might see the lock file, fail to find the corresponding running process, decide to ignore the lock, and proceed to open the library. That could have consequences.
    But I haven't developed Aperture, so I don't know exactly how it would behave.
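    To make the lock-file mechanism concrete, here is a minimal sketch of the PID-lock pattern described above; the file name and the recovery behavior are assumptions for illustration, not Aperture's actual implementation:

        import os

        LOCK_PATH = "Library.aplibrary/Database/apdb/lockfile"  # hypothetical name

        def pid_is_alive(pid: int) -> bool:
            try:
                os.kill(pid, 0)          # signal 0: existence check only
                return True
            except ProcessLookupError:
                return False
            except PermissionError:
                return True              # exists, but owned by another user

        def try_acquire_lock(lock_path: str) -> bool:
            try:
                # O_EXCL makes creation atomic on a local file system; over
                # NFS/SMB this guarantee is much weaker.
                fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            except FileExistsError:
                with open(lock_path) as f:
                    pid = int(f.read().strip() or 0)
                if pid and pid_is_alive(pid):
                    return False         # a live *local* process holds the lock
                # Stale lock - or a live process on ANOTHER machine. The PID
                # check cannot tell those apart, which is exactly the hazard
                # of opening the library from two computers over a share.
                os.unlink(lock_path)
                return try_acquire_lock(lock_path)
            with os.fdopen(fd, "w") as f:
                f.write(str(os.getpid()))
            return True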

  • Best practice for Admin viewing contents of network homes

    How are you viewing the contents of your users' network home directories in the gui?
    Is there a better way than logging in locally as root? I'd like to do this over AFP if possible.
    Can I make a HomeAdmins group and propagate that group to have read access to all users' home folders? What about homes that are subsequently created?
    Thanks,
    b.

    You probably know this already, but:
    1. Nothing bad should happen if you change the group owner of your home directories unless you're using the current group ownership for something important.
    2. If you set the setgid bit on the root directory of the sharepoint and it is owned by the admin group then new folders created within should have the group owner you want. There are various ways to ensure the home directories would have the proper permissions.
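    A minimal sketch of point 2, assuming a POSIX volume, a hypothetical share root /Shares/Homes, and a hypothetical homeadmins group:

        import grp
        import os
        import stat

        SHARE_ROOT = "/Shares/Homes"     # hypothetical sharepoint root
        gid = grp.getgrnam("homeadmins").gr_gid

        # Hand the share root to the admin group and set the setgid bit, so
        # directories created underneath inherit the group owner automatically.
        os.chown(SHARE_ROOT, -1, gid)    # -1 leaves the user owner unchanged
        mode = os.stat(SHARE_ROOT).st_mode
        os.chmod(SHARE_ROOT, mode | stat.S_ISGID | stat.S_IRWXG)

    Existing home folders would still need a one-off recursive chgrp/chmod; the setgid bit only helps for folders created afterwards.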

  • Best practices for setting up virtual servers on Windows Server 2012 R2

    I am creating a Web server from scratch with Windows Server 2012 R2. I expect to have a host server and then 3 virtual servers: one that runs all of the web apps as a web server, another as a database server, and one for session state. I expect to use Windows Server 2012 R2 for the web server and database server, but Windows 7 for session state.
    I have a SATA2 Intel SROMBSASMR RAID card with battery backup to which I am attaching a small SSD that I expect to use for session state, and an IBM ServeRAID M1015 SATA3 card on which I am running Intel 520 Series SSDs that I expect to use for the web server and database server.
    I have some questions. I am considering booting the host from a flash drive on the internal USB port, then using two small SSDs in a RAID 0 for the web server (the theory being that if something goes wrong, session state is on a different drive), and then 2 more for the database server in a RAID 1 configuration.
    Please feel free to poke holes in this and tell me of a better way to do it.
    Am I right in assuming that the host running on a slow internal USB drive has no effect on the virtual servers once the host and the virtual servers are booted up?
    DCSSR

    There are two issues with RAID0:
    1) It's not as fast as people think. With a general-purpose file system like NTFS or ReFS (the choice on Windows is limited), you're not going to see great benefits, because there is very little chance that a whole RAID stripe is updated at the same time (the I/Os would need to touch all the SSDs in the set, so 256KB+ in real life). A web server workload is quite far from sequential reads or writes, so RAID0 is not going to shine here. A log-structured file system (or at least an FS with logging capabilities; think ZFS with ZIL enabled) *will* benefit from properly assigned SSDs in RAID0.
    2) RAID0 is dangerous. One lost SSD renders the whole RAID set useless. So unless you build a network RAID1-over-RAID0 (mirroring RAID sets between multiple hosts with a virtual SAN or a synchronous replication solution), you'll be sitting on a time bomb. Not good :)
    StarWind VSAN [Virtual SAN] clusters Hyper-V without SAS, Fibre Channel, SMB 3.0 or iSCSI; it uses Ethernet to mirror internally mounted SATA disks between hosts.
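    To put a number on the "time bomb": with independent drives, a RAID0 set is lost if any member fails, so the loss probability grows with every drive you add. A quick illustration (the per-drive failure rate is a made-up placeholder):

        # Probability that a RAID0 set loses data within a year, assuming each
        # member drive fails independently with annual probability p.
        def raid0_loss_probability(p: float, drives: int) -> float:
            return 1 - (1 - p) ** drives

        p = 0.03  # hypothetical 3% annual failure rate per SSD
        for n in (1, 2, 4):
            print(f"{n} drive(s): {raid0_loss_probability(p, n):.1%} per year")
        # 1 drive(s): 3.0%   2 drive(s): 5.9%   4 drive(s): 11.5%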

  • Best Practice to troubleshoot a Crawler Issue after SQL Server restarts

    Hi Everyone
    I am after some general advice, or a suggested place to start, when troubleshooting crawler issues after an unplanned, out-of-sequence server restart - specifically, of the SQL database server in a SharePoint 2010 farm. Better yet, is there a standard-practice way of resolving such issues?
    So far, the articles I have found suggest options ranging from reviewing the crawl logs, to creating a new crawl component, right through to using Fiddler to monitor the crawler.
    Are these sufficient places to start and methodologies to follow? What else should be considered?
    Any advice greatly appreciated.
    Thanks,
    Mike

    Well, as I said before, there are lots of different potential issues & resolutions for crawlers; it really depends on the issue. I would say that the base troubleshooting steps start the same no matter which service or feature you are looking at, so I'll try to keep this sort of generic. Beyond finding the details of the problem, the SOP or process will vary greatly based on what the error is. I hope this helps, and sorry if it's not more specific.
    1 - Check the ULS logs.
    2 - Check the Windows application logs.
    3 - Verify related services are running (Get-SPServiceInstance); possibly stop/start them to reprovision the instance on the affected server.
    4 - Clear the config cache (this alone will clear about 30% of your basic problems).
    5 - Verify disk space & resource consumption on the affected server (& SQL; SQL always has the potential to be the true "affected" server).
    6 - iisreset.
    7 - Verify connectivity between all servers in the farm and SP.
    8 - Verify required features are activated.
    9 - Check whether any changes were made to the environment recently (new hardware, updates to the OS or apps, updates to GPOs, new solutions, etc.).
    10 - Check whether the issue is reproducible in another environment (only reliable if it is a similar environment, i.e. a test or DR farm at the same patch level). See if it occurs in other site collections within the web app, in different web apps, on different servers/browsers, etc.; basically, try to nail down the scope of the problem.
    There is a whole slew of things you could check, from verifying certificates & permissions, to rerunning psconfig, to checking registry keys; again, it depends. Hopefully this can get you started, though. In the end, ULS is where all the real info on the issue is going to be, so make sure you check there. Don't go in with tunnel vision either: if you see other errors in ULS, check them out; they may or may not be related. SharePoint is an intricate product with way more moving parts than most systems. Fix the little quick ones that you know you can handle; this will help keep the farm clean and healthy, as well as crossing them off the list of potential suspects for your root cause.
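    Since step 1 (the ULS logs) is where most of the real information lives, here is a minimal sketch of scanning them for high-severity entries. The log directory is the SharePoint 2010 default and the column layout is the standard tab-separated ULS format; both are assumptions to verify against your farm:

        import csv
        import glob

        # Default SharePoint 2010 ULS location; adjust for your install.
        LOG_DIR = (r"C:\Program Files\Common Files\Microsoft Shared"
                   r"\Web Server Extensions\14\LOGS")

        # ULS columns: Timestamp, Process, TID, Area, Category, EventID,
        # Level, Message, Correlation.
        for path in glob.glob(LOG_DIR + r"\*.log"):
            with open(path, newline="", errors="replace") as f:
                for row in csv.reader(f, delimiter="\t"):
                    if len(row) > 7 and row[6].strip() in ("Critical", "Unexpected"):
                        print(path, "|", row[0].strip(), "|", row[7].strip()[:120])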
    Christopher Webb | MCM: SharePoint 2010 | MCSM: SharePoint Charter | MCT | http://christophermichaelwebb.com
