Best Practices to Save Credit

Hi,
I noticed that my credit goes down even if I have all my virtual machines stopped and deallocated. What else can I do to save credit, when I'm not using it? I currently have some VMs, 1 cloud service, 1 storage, and 1 network, nothing else.
Thanks,
RP

Hi,
Please click the SHUT DOWN button in the Azure management portal. Once the virtual machine shows as
Stopped (Deallocated), you will not be charged for that virtual machine's compute time; refer to
http://blogs.technet.com/b/canitpro/archive/2014/04/23/step-by-step-virtual-machine-billing-in-azure.aspx for more details.
A cloud service just provides you with a DNS name (yourcloudservice.cloudapp.net, for example). What you get charged for is the VM, not the cloud service itself, so if you have nothing deployed in a cloud service, you don't get charged anything.
Also note that when you create a virtual machine, its OS disk image (around 120 GB) is stored in Azure Storage, and you continue to be charged for that storage even while the VM is stopped; refer to
http://blogs.msdn.com/b/sql_shep/archive/2013/06/10/azure-billing-per-minute-and-no-compute-charge-for-a-stopped-iaas-vm.aspx for more details. For Azure Storage pricing, see:
http://azure.microsoft.com/en-us/pricing/details/storage/
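If you want to script this for several VMs, the sketch below may help; it uses the legacy Service Management SDK for Python (azure-servicemanagement-legacy), and the subscription, certificate and name values are all placeholders, so treat it as an assumption-laden outline rather than a recipe:

# Sketch: stop-and-deallocate a VM so compute billing stops.
# All names below are placeholders for your own subscription/deployment.
from azure.servicemanagement import ServiceManagementService

sms = ServiceManagementService(
    subscription_id="<your-subscription-id>",
    cert_file="mycert.pem",
)

# 'StoppedDeallocated' releases the compute resources (no VM charge);
# plain 'Stopped' keeps them allocated and still billable.
sms.shutdown_role(
    service_name="mycloudservice",
    deployment_name="mydeployment",
    role_name="myvm",
    post_shutdown_action="StoppedDeallocated",
)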
Best Regards,
Jambor

Similar Messages

  • Best Practice to save the contacts in the Database

    Hi everybody,
    I'm looking for some tips about data in the database. I would like to know the best practices for saving a record. For example, I want to save the First Name, Second Name and Last Name in the DB. How should I save them - with the first letter in uppercase, or the whole record in lowercase?
    E.g., Austin Martin or austin martin
    What is the best way or best practice to do that?
    Can someone tell me? I would appreciate it, thanks.
    Fabián Carrillo
    Siebel CRM Consultant

    Hi!
    Not quite sure what you're after here. Generally, store the data in the way it's going to be presented - sentence case in the examples you've given.
    If you're thinking along the lines of case sensitivity in querying within Siebel, then take a look at the Case Insensitivity Wizard in Siebel Bookshelf:
    http://download.oracle.com/docs/cd/E14004_01/books/UPG/UPG_DB_Upg_Util10.html
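    For illustration, a simple normalization before saving could look like the sketch below (plain Python just to show the idea; naive capitalization mishandles names like 'McDonald' or 'van der Berg', so treat it as a starting point):

    def normalize_name(raw):
        # Store names with initial caps, e.g. 'austin martin' -> 'Austin Martin'
        return " ".join(part.capitalize() for part in raw.split())

    print(normalize_name("austin martin"))  # Austin Martin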
    Regards,
    mroshaw

  • Best practice to save file in the database?

    Hello,
    in version 10.1.2 I made a little application which allows saving a file in a table.
    I defined the column as ORDSYS.ORDDOC, so I could use the standard functionality of JDeveloper.
    Is this a good way to do it, or is it better to go another way?
    Any information or ideas are appreciated.

    Hi,
    There are other possibilities using BLOB/CLOB types. I have never used ORDSYS, so I can't compare to it, but it's a possibility.
    Some links:
    http://download-uk.oracle.com/docs/cd/B31036_01/doc/appdev.22/b28839/up_dn_files.htm#CJAHDJDA
    From Denes Kubicek:
    http://htmldb.oracle.com/pls/otn/f?p=31517:15
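    As a minimal sketch of the BLOB route (the DOCS table and the connection handling here are made up, using the cx_Oracle driver):

    # Assumes a table: CREATE TABLE docs
    #   (id NUMBER PRIMARY KEY, filename VARCHAR2(200), content BLOB)
    import cx_Oracle

    def save_file(conn, doc_id, path):
        # Read the file as bytes and bind it straight into the BLOB column
        with open(path, "rb") as f:
            data = f.read()
        cur = conn.cursor()
        cur.execute(
            "INSERT INTO docs (id, filename, content) VALUES (:1, :2, :3)",
            [doc_id, path, data],
        )
        conn.commit()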
    Regards,
    Tif

  • Best Practice to save record in a table with appropriate trigger

    Hi,
    Five users are inserting client information into a client table at the same time through an Oracle Form. The client table has the following fields:
    CLIENT ID
    CLIENT NAME
    CLIENT ADDRESS
    CLIENT ID is generated automatically by calling a procedure. In this procedure I use the MAX function to get the maximum value of CLIENT ID. After that, I store the newly generated CLIENT ID in a data block item, say :MASTER.CLIENT_ID.
    The problem is that all five users can get the same MAX value (suppose 40) for CLIENT ID at the same time, and the Oracle Form will surely throw an exception when the records are inserted into the client table. CLIENT ID is the PK and a member of the MASTER data block.
    I hope the above clearly illustrates the problem. Please advise: can a PRE-INSERT trigger handle this problem efficiently? If so, how?
    Thanks,

    Hello,
    Welcome to the forum!
    > CLIENT ID is generated automatically by calling a procedure.
    So, in which trigger are you calling that procedure?
    > After that, I store the newly generated CLIENT ID in a data block item, say :MASTER.CLIENT_ID.
    I would guess that you are calling the ID-generation procedure in a block-level WHEN-CREATE-RECORD trigger, because that trigger normally initializes values for a new record. If not, please specify.
    > Please advise: can a PRE-INSERT trigger handle this problem efficiently?
    Yes, PRE-INSERT will work without any problem.
    > If so, how?
    Because PRE-INSERT fetches the MAX number from the table at the time of save, not at the time of record creation. So even if five users are entering records at the same time, each user's PRE-INSERT gets the max number only at the moment that user saves, and since there is normally some time difference between saves, the duplicate-key problem largely disappears.
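    An alternative that does not depend on timing at all is an Oracle sequence, so the database hands each session a unique CLIENT_ID. A minimal sketch of the idea (the sequence name is made up; the SQL is what matters, shown here from Python/cx_Oracle just for illustration):

    # Assumes a sequence created once with: CREATE SEQUENCE client_seq;
    import cx_Oracle

    conn = cx_Oracle.connect("user/password@host/service")  # placeholder DSN
    cur = conn.cursor()

    # NEXTVAL is atomic, so two sessions can never draw the same value
    cur.execute("SELECT client_seq.NEXTVAL FROM dual")
    client_id, = cur.fetchone()

    cur.execute(
        "INSERT INTO client (client_id, client_name, client_address) "
        "VALUES (:1, :2, :3)",
        [client_id, "ACME Corp", "1 Main St"],
    )
    conn.commit()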
    -Ammad

  • TechNet Wiki - Best Practice Blog Posts

    Lately, we've had some great blog posts about best practices on TechNet Wiki. So we're going to share them with you here...
    Wiki Life: Commenting on Comments... Care to Comment? - 10/16/14 by Ed Price
    How to write a great post on the Wiki - For Dummies - 10/12/14 by Gokan Ozcifci
    Wednesday - Wiki Life: The Importance of Longer, High-Quality Articles - 10/8/14 by Ed Price
    Wednesday - Wiki Life: 10 ways to become the most hated Wiki ninja on the planet - 10/1/14 by Peter Geelen
    Wiki Life: PowerShell PowerPack! - 9/17/14 by Matthew Yarlett
    The most unseen and unspoken TechNet Wiki roles: The Mentor Role - 6/22/14 by Sandro Pereira
    Wiki Life: Smart Tags - 6/18/14 by Matthew Yarlett
    Wiki Life: Ownership and Credibility - 6/11/14 by Matthew Yarlett
    Wiki Life: Best Practices for building TechNet Wiki Portals - 6/4/14 by Horizon Net
    Wiki life: TechNet Wiki tagging, the ugly truth - 5/29/14 by Peter Geelen
    Wiki Life: Getting too Personal! - 5/14/14 by Matthew Yarlett
    Wiki Life: YOU edited MY article??! - 4/30/14 by Matthew Yarlett
    Wiki Life: Are you right in making it a rite to write? - 4/16/14 by Matthew Yarlett
    Wiki Life - Alerts - 4/9/14 by Alan Carlos
    Wiki Life: Speling an gamma, it is umpotant? - 4/2/14 by Matthew Yarlett
    Wiki Life: How to Translate TechNet Wiki Articles - 4/2/14 by Horizon Net
    Wiki Life: Attention to Detail - 3/19/14 by Matthew Yarlett
    Wednesday - Wiki Life - Mobility - 3/12/14 by Alan Carlos
    Wiki Life: A Picture is Worth a 1000 Words - 3/5/14 by Matthew Yarlett
    Wiki Life: Cut'N'Paste - 2/19/14 by Matthew Yarlett
    Wiki Life: How to Join Leadership - 2/19/14 by Horizon Net
    Wiki Life: Featured Articles in the TechNet Wiki - 2/12/14 by Durval Ramos
    Wiki Life: Code.Format() - 2/5/14 by Matthew Yarlett
    Wiki Life: The CodePlex Corner - 2/5/14 by Horizon Net
    Did you know that we have a layout article? - 1/29/14 by Durval Ramos
    Wiki Life: Get to the point, keep it short! - 1/22/14 by Matthew Yarlett
    Wiki Life: Planning a Great Article - 1/8/14 by Matthew Yarlett
    Wiki Life: Best Practices for converting an MSDN / TechNet Forum thread into a Wiki Article!!! - 12/25/13 by Ed Price
    Wiki Life: Best Practices for Giving Credit - 12/18/13 by Horizon Net
    Wiki Life: How To Fix a Wiki Article TOC - 12/4/13 by Benoit Jester
    Wiki Life: How To Detect Missing Tags Without any Effort - 11/20/13 by Benoit Jester
    Wiki Life: How To Import a Microsoft Excel Spreadsheet Into a Wiki Article - 10/30/13 by Markus Vilcinskas
    Wiki Life: Cross Linking - 10/9/13 by Horizon Net
    Wiki Life: User Groups Portal - 10/2/13 by Horizon Net

    Respected sensei Wiki Ninja,
    what else do you need to start a Wiki article?
    Put your signature into practice!
    So I kindly invite you all to continue your braindump over here:
    http://social.technet.microsoft.com/wiki/contents/articles/27905.technet-wiki-best-practices-blog-posts-articles.aspx
    Peter Geelen (Microsoft Belgium) - Premier Field Engineer Security & Identity

  • Best location to save projects in Captivate 6

    I am currently saving all of my Captivate 6 projects to my C drive.  I wanted to make sure that this is the best place to save my main projects.  Also, what is a best practice to save the backup projects (is it ok to save them on a network drive?).
    Thank you,
    Christie

    Hi Christie,
    Welcome to Adobe Forums.
    You can save your projects anywhere on your machine, as you prefer. However, it is always recommended to keep a backup copy of each project. You can save the backups anywhere (a network drive or flash drive), but make sure that when you work on a project it is first copied locally to your computer. If you work on projects saved directly on a network drive, you may run into issues publishing them or even working with them.
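    If it helps, here is a tiny sketch of that copy-local-then-back-up routine (all paths are placeholders; adjust them to your own folders):

    import os, shutil
    from datetime import date

    LOCAL = r"C:\CaptivateProjects"          # hypothetical local work folder
    NETWORK = r"\\server\captivate_backups"  # hypothetical network share

    def backup(project_file):
        # Copy e.g. intro.cptx to the share with a date stamp in the name
        stamped = "{}_{}".format(date.today().isoformat(), project_file)
        shutil.copy2(os.path.join(LOCAL, project_file),
                     os.path.join(NETWORK, stamped))

    backup("intro.cptx")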
    Hope this helps!
    Thanks!

  • Setting Disks/Caches/Vault for multiple projects - Best Practices

    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents). It is also best practice to save the Project File to your primary drive.
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    Thanks!

    maxwell wrote:
    Please confirm a couple assumptions for me:
    1. Because Scratch Disk, Cache and Autosave preferences are all contained in System Settings, I cannot choose different settings for different projects at the same time (i.e. I have to change the settings upon launch of a new project, if I desire a change).
    Yes
    2. It is good practice to set the Video/Render Disks to an external drive, and keep the Cache and Autosave Vault set to the primary drive (e.g. user:Documents:FCP Documents).
    Yes
    It is also best practice to save the Project File to your primary drive.
    I don't. And I don't think it matters. But you should back that file up to some other drive (like Time Machine).
    And a question: I see that the Autosave Vault distinguishes between projects, and the Waveform Cache Files distinguishes between clips. But what happens in the Thumbnail Cache Files folder when you have more than one project targeting that folder? Does it lump it into the same file? Overwrite it? Is that something about which I should be concerned?
    I wouldn't worry about it.
    o| TOnyTOny |o

  • Best Practice - Check/React to Conditions Before Sales Order Save

    I am saving, for discussion's sake, what is essentially a sales order.
    Component: /ALMGT/BT115H_SLSO
    View: SOHOverView
    When the 'Save' is pushed, we want to do some error checking/handling - very simple stuff, like
    If attr1 is initial and attr2 > 0
        Issue message
        set attr_status to 'Situation1'
    endif.
    It seems like I can just redefine the eh_onsave() method, but I wanted to see if there is a more elegant or best practice methodology for doing this.
    Thanks...
    ...Mike
    We're running FRM ([Fundraising Management|http://www.sap.com/services/portfolio/customdev/brochures/index.epx]), an SAP Custom Development solution, on top of CRM 7.0.

    Mike,
    Good to see you here again. The best approach is actually to implement this logic below the UI layer, in the one order layer instead. Based on the component, it looks like you are using a business transaction, which means you can use the BADI ORDER_SAVE to trigger the error.
    Do a search in the CRM General forum on how to use this. Now, I'm not familiar with that particular solution, but the techniques for manipulating transactions and using the available BADIs should be the same if the data is being saved as business transactions inside of CRM.
    Take care,
    Stephen

  • Best practice for infoview and which folder to save webi or crystal reports

    All,
    I was wondering what your thoughts are about the following question. Imagine you have a customer using both Webi reports and Crystal Reports against BW.
    The thing is that he is transporting the Crystal Reports through SAP, using the rsadmin transaction to manage them, and also uses SAP transports to move them to PROD. As for Webi, he is using the Import Wizard to move the reports to PROD.
    As you know, the Crystal Reports will end up in an SAP folder, something like SAP/(description of the menu role).
    The Webi reports, on the other hand, live in the Public folder.
    The question is: what would be the best practice?
    1 - Store all your Crystal Reports against BW in the SAP menu-role folders (since that is where the SAP transport puts them) and keep the Webi reports in the Public folder?
    2 - Copy your Webi reports from the Public folder to the SAP/(menu role) folder where the Crystal Reports are?
    3 - Copy your Crystal Reports from the SAP/(menu role) folder to the Public folder?
    Let me know your feeling about the best practice.
    Thank you
    Philippe

    Just a hint:
    The path SAP/2.0 is not mandatory. You can configure the SAP BW Publisher on the BW side (transaction /CRYSTAL/RPTADMIN) so that your reports are stored in another folder on the BOE side. Please note that the addition of the role name to the path cannot be overridden.
    Regards,
    Stratos

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
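    As a sketch of what 'scripts in proper order' can look like (the file names and the run_sql callback are hypothetical), the whole deployment reduces to replaying numbered scripts straight out of version control:

    # Replay versioned scripts (001_create_tables.sql, 002_seed_data.sql, ...)
    # in order; run_sql is a placeholder for your driver's execute call.
    import glob

    def deploy(run_sql, script_dir):
        for path in sorted(glob.glob(script_dir + "/[0-9][0-9][0-9]_*.sql")):
            print("Applying", path)
            with open(path) as f:
                run_sql(f.read())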
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in, ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse generated by an ETL process, might be exempt; but the process that creates that data is not exempt - that process, and ultimately the data, must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments themselves that I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order. If something goes wrong, the responsible party is responsible for assisting in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree, for a simple 5 new table and small amount of data scenario it may seem like overkill.
    But, despite what you say it simply cannot be that easy for one simple reason. Adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention the part about what changes are being made to actually USE what you are adding.

  • Best Practices For Household IOS's/Apple IDs

    Greetings:
    I've been searching support for best practices for sharing primarily apps, music and video among multiple iOS devices/Apple IDs. If there is a specific article, please point me to it.
    Here is my situation: 
    We currently have 3 iPads (2 kids, 1 dad) in the household and one iTunes account on a Windows computer. I previously had all iPads on a single Apple ID/credit card and controlled the kids' downloads through the Apple ID password, which I kept secret. As the kids have grown older, I found myself constantly entering my password as they took more interest in music/apps/video. I like this approach because all content was shared; I dislike it because I was constantly asked to input the password for all downloads.
    So, I recently set up an individual account for them with the allowance feature at iTunes that allows them to download content on their own (I set restrictions on their iPads).  Now I have 3 Apple IDs under one household.
    My questions:
    With the 3 Apple IDs, what is the best way to share apps,music, videos among myself and the kids?  Is it multiple accounts on the computer and some sort of sharing? 
    Thanks in advance...

    Hi Bonesaw1962,
    We've had our staff and students run iOS updates OTA via Settings -> Software Update. In the past, we put a DNS block on Apple's update servers to prevent users from updating iOS (like last fall when iOS 7 was first released). By blocking mesu.apple.com, the iPads weren't able to check for or install any iOS software updates. We waited until iOS 7.0.3 was released before we removed the block to mesu.apple.com, at which point we told users that if they wanted to update to iOS 7 they could do so OTA. We used our MDM to run reports periodically to see how many people updated to iOS 7 and how many stayed on iOS 6. As time went on, just about everyone updated on their own.
    If you go this route (depending on the number of devices you have), you may want to take a look at Caching Server 2 to help with the network load https://www.apple.com/osx/server/features/#caching-server . From Apple's website, "When a user on your network downloads new software from Apple, a copy is automatically stored on your server. So the next time other users on your network update or download that same software, they actually access it from inside the network."
    I wish there was a way for MDMs to manage iOS updates, but unfortunately Apple hasn't made this feature available to MDM providers. I've given this feedback to our Apple SE, but haven't heard if it is being considered or not. Keeping fingers crossed.
    Hope this helps. Let us know what you decide on and keep us posted on the progress. Good luck!!
    ~Joe

  • Best Practice on using and refreshing the Data Provider

    I have a 'users' page that lists all the users in a table - let's call it the master page. One can click on the first column of the master page and it takes them to the 'detail' page, where one can view and update the user detail.
    Master and detail use two different data providers based on two different CachedRowSets.
    Master CachedRowSet (Session scope): SELECT * FROM Users
    Detail CachedRowSet (Session scope): SELECT * FROM Users WHERE User_ID=?
    I want the master to be updated whenever the detail page is updated. There are various options to choose from:
    1. I could call masterDataProvider.refresh() after I call the detailDataProvider.commitChanges() - which is called on the save button on the detail page. The problem with this approach is that the master page will not be refreshed across all user sessions, but only for the one saving the detail page.
    2. I could call masterDataProvider.refresh() on the preRender() event of the master page. The problem with this approach is that refresh() will be called every single time someone views the master page. Furthermore, if someone goes to the next page (using the built-in pagination on the table on the master page), clicks on a user to view its detail and then closes the detail page, it does not keep track of the pagination (what page the user was on when he/she clicked on a record to view its detail).
    I can find some work around to resolve this problem, but I think this should be a fairly common usage (two page CRUD with master-detail). If we can discuss and document some best practices of doing this, it will help all the developers.
    Discussion:
    1. What is the best practice on setting the scope of the data providers and CachedRowSets? I noticed that in the tutorial examples, they used page/request scope for the data provider but session scope for the associated CachedRowSet.
    2. What is the best practice to refresh the master data provider when a record/row is updated in the detail page?
    3. How to keep track of pagination (what page the user was on when he/she clicked on the first column in the master page table), so that upon updating the detail page, we can provide the user with a 'Close' button to take them back to whatever page number he/she was on?
    Thanks

    Thanks. I think this is useful information for all. Do we even need two data providers and associated row sets? Can't we just use TableRowDataProvider, like this:
    TableRowDataProvider rowData = (TableRowDataProvider) getBean("currentRow");
    If so, I am trying to figure out how to pass this from the master to the detail page. Essentially the detail page uses a row from the master data provider. Then I need the user to be able to change the detail (row) and save the changes (in the table). This is a fairly common issue in most data-driven web apps. I need to design it right, vs. just coding.

  • Best Practices for Integrating UC-5x0's with SBS 2003/8?

    Almost all of Cisco's SBCS market is in the small and medium business space. Most, if not all, of these SMBs have a Microsoft Small Business Server 2003 or 2008. In order for Cisco to be considered as a purchase option, it will be critical that the UC-5x0 integrates well into these networks.
    To that end, I see a lot of talk here about how to implement parts and pieces of this, but no guidance from Cisco, no labs and no best practices or other documentation. If I am wrong, please correct me.
    I am currently stumbling through and validating these configurations myself. Once complete, I will post detailed recommendations. However, it would have been nice to have a lab to follow instead of having to learn from each mistake.
    Some of the challanges include;
    1. Where should the UC-540 be placed: as the gateway for QoS, or behind a validated UC-5x0 router/security appliance combination?
    2. Should the Microsoft Windows Small Business Server handle DHCP (as Microsoft's documentation says it must), or must the UC-540 handle DHCP to prevent loss of features? What about a DHCP relay scheme?
    3. Which device should handle DNS?
    My documentation (and I recommend that any Cisco lab/best-practice guidance include it as well) will assume the following real-world scenario, the same which applies to a majority of my SMB clients:
    1. A UC-540 device utilizing SIP for the cost savings
    2. High Speed Internet with 5 static routable IP addresses
    3. An existing Microsoft Small Business Server 2003/8
    4. An additional Line of Business application or terminal server that utilizes the same ports (i.e. TCP 80/443/3389) as the UC-540 and the SBS, but on separate routable IPs (making up crazy non-standard port redirections is not an option).
    5. An employee who teleworks from various places that provide a seat and a network jack which are not under our control (i.e. an employee's home, a client's office, or a telework center). This teleworker should use the built-in VPN feature within the SPA or 7925G phones, because we will not have administrative access to any third party's VPN/firewall.
    Your thoughts are appreciated.

    Progress Report;
    The following changes have been made to the router in support of the previously detailed scenario. Everything appears to be working as intended.
    DHCP is still on the UC540 for now. DNS is being performed by the SBS 2008.
    Interestingly, the CCA still works. The NAT module even shows all the private mapped IPs, but not the corresponding public IPs. I wouldn't recommend trying to make any changes via the CCA in the NAT module.
    To review, this configuration assumes the following;
    1. The UC540 has a public IP address of 4.2.2.2
    2. A Microsoft Small Business Server 2008 using an internal IP of 192.168.10.10 has an external IP of 4.2.2.3.
    3. A third line of business application server with www, https and RDP that has an internal IP of 192.168.10.11 and an external IP of 4.2.2.4
    First, back up your current configuration via the CCA.
    Next, telnet into the UC540, log in, and cut and paste the following to 1:1 NAT the two additional public IP addresses:
    ip nat inside source static tcp 192.168.10.10 25 4.2.2.3 25 extendable
    ip nat inside source static tcp 192.168.10.10 80 4.2.2.3 80 extendable
    ip nat inside source static tcp 192.168.10.10 443 4.2.2.3 443 extendable
    ip nat inside source static tcp 192.168.10.10 987 4.2.2.3 987 extendable
    ip nat inside source static tcp 192.168.10.10 1723 4.2.2.3 1723 extendable
    ip nat inside source static tcp 192.168.10.10 3389 4.2.2.3 3389 extendable
    ip nat inside source static tcp 192.168.10.11 80 4.2.2.4 80 extendable
    ip nat inside source static tcp 192.168.10.11 443 4.2.2.4 443 extendable
    ip nat inside source static tcp 192.168.10.11 3389 4.2.2.4 3389 extendable
    Next, you will need to amend your UC540's default ACL.
    First, copy your existing entries, as I have done below (in bold), and paste them into a notepad.
    Then, I'm told, the best practice is to delete the entire existing list first, and finally add the rules back, along with the additional rules for your SBS and LOB server (mine in bold), as follows:
    int fas 0/0
    no ip access-group 104 in
    no access-list 104
    access-list 104 remark auto generated by SDM firewall configuration##NO_ACES_24##
    access-list 104 remark SDM_ACL Category=1
    access-list 104 permit tcp any host 4.2.2.3 eq 25 log
    access-list 104 permit tcp any host 4.2.2.3 eq 80 log
    access-list 104 permit tcp any host 4.2.2.3 eq 443 log
    access-list 104 permit tcp any host 4.2.2.3 eq 987 log
    access-list 104 permit tcp any host 4.2.2.3 eq 1723 log
    access-list 104 permit tcp any host 4.2.2.3 eq 3389 log
    access-list 104 permit tcp any host 4.2.2.4 eq 80 log
    access-list 104 permit tcp any host 4.2.2.4 eq 443 log
    access-list 104 permit tcp any host 4.2.2.4 eq 3389 log
    access-list 104 permit udp host 116.170.98.142 eq 5060 any
    access-list 104 permit udp host 116.170.98.143 any eq 5060
    access-list 104 deny   ip 10.1.10.0 0.0.0.3 any
    access-list 104 deny   ip 10.1.1.0 0.0.0.255 any
    access-list 104 deny   ip 192.168.10.0 0.0.0.255 any
    access-list 104 permit udp host 116.170.98.142 eq domain any
    access-list 104 permit udp host 116.170.98.143 eq domain any
    access-list 104 permit icmp any host 4.2.2.2 echo-reply
    access-list 104 permit icmp any host 4.2.2.2 time-exceeded
    access-list 104 permit icmp any host 4.2.2.2 unreachable
    access-list 104 permit udp host 192.168.10.1 eq 5060 any
    access-list 104 permit udp host 192.168.10.1 any eq 5060
    access-list 104 permit udp any any range 16384 32767
    access-list 104 deny   ip 10.0.0.0 0.255.255.255 any
    access-list 104 deny   ip 172.16.0.0 0.15.255.255 any
    access-list 104 deny   ip 192.168.0.0 0.0.255.255 any
    access-list 104 deny   ip 127.0.0.0 0.255.255.255 any
    access-list 104 deny   ip host 255.255.255.255 any
    access-list 104 deny   ip host 0.0.0.0 any
    access-list 104 deny   ip any any log
    int fas 0/0
    ip access-group 104 in
    Lastly, save to memory
    wr mem
    One final note - if you need to use the Microsoft Windows VPN client from a workstation behind the UC540 to connect to a VPN server outside your network, and you are getting Error 721 and/or Error 800, you will need to use the following commands to add to ACL 104:
    (config)#ip access-list extended 104
    (config-ext-nacl)#7 permit gre any any
    I'm hoping there may be a better way to allow VPN clients on the LAN with a much more specific and limited rule. I will update this post with that info if and when I discover one.
    Thanks to Vijay in Cisco TAC for the guidance.

  • Best Practices for Using Photoshop (and Computing in General)

    I've been seeing some threads that lead me to realize that not everyone knows the best practices for doing Photoshop work on a computer, or for conscientious computing in general. I thought it might be a good idea for those of us with some experience to contribute and discuss best practices for making the Photoshop and computing experience more reliable and enjoyable.
    It'd be great if everyone would contribute their ideas, and especially their personal experience.
    Here are some of my thoughts on data integrity (this shouldn't be the only subject of this thread):
    Consider paying more for good hardware. Computers have almost become commodities, and price shopping abounds, but there are some areas where spending a few dollars more can be beneficial.  For example, the difference in price between a top-of-the-line high performance enterprise class hard drive and the cheapest model around with, say, a 1 TB capacity is less than a hundred bucks!  Disk drives do fail!  They're not all created equal.  What would it cost you in aggravation and time to lose your data?  Imagine it happening at the worst possible time, because that's exactly when failures occur.
    Use an Uninterruptable Power Supply (UPS).  Unexpected power outages are TERRIBLE for both computer software and hardware.  Lost files and burned out hardware are a possibility.  A UPS that will power the computer and monitor can be found at the local high tech store and doesn't cost much.  The modern ones will even communicate with the computer via USB to perform an orderly shutdown if the power failure goes on too long for the batteries to keep going.  Again, how much is it worth to you to have a computer outage and loss of data?
    Work locally, copy files elsewhere.  Photoshop likes to be run on files on the local hard drive(s).  If you are working in an environment where you have networking, rather than opening a file right off the network, then saving it back there, consider copying the file to your local hard drive then working on it there.  This way an unexpected network outage or error won't cause you to lose work.
    Never save over your original files.  You may have a library of original images you have captured with your camera or created.  Sometimes these are in formats that can be re-saved.  If you're going to work on one of those files (e.g., to prepare it for some use, such as printing), and it's a file type that can be overwritten (e.g., JPEG), as soon as you open the file save the document in another location, e.g., in Photoshop .psd format.
    Save your master files in several places.  While you are working in Photoshop, especially if you've done a lot of work on one document, remember to save your work regularly, and you may want to save it in several different places (or copy the file after you have saved it to a backup folder, or save it in a version management system).  Things can go wrong and it's nice to be able to go back to a prior saved version without losing too much work.
    Make Backups.  Back up your computer files, including your Photoshop work, ideally to external media.  Windows now ships with a quite good backup system, and external USB drives with surprisingly high capacity (e.g., Western Digital MyBook) are very inexpensive.  The external drives aren't that fast, but a backup you've set up to run late at night can finish by morning, ready for if/when you have a failure or loss of data.  And if you're really concerned with backup integrity, you can unplug an external drive and take it to another location.
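    For what it's worth, even a few lines on a schedule can mirror a working folder to an external drive (paths are placeholders; run it from Task Scheduler or cron):

    # Sketch: mirror the work folder to an external backup drive (Python 3.8+)
    import shutil

    SRC = r"C:\Users\me\Pictures\PhotoshopWork"  # hypothetical work folder
    DST = r"E:\Backups\PhotoshopWork"            # hypothetical backup drive

    shutil.copytree(SRC, DST, dirs_exist_ok=True)
    print("Backup complete.")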
    This stuff is kind of "motherhood and apple pie" but it's worth getting the word out I think.
    Your ideas?
    -Noel

    APC Back-UPS XS 1300.  $169.99 at Best Buy.
    Our power outages here are usually only a few seconds; this should give my server about 20 or 25 minutes run-time.
    I'm setting up the PowerChute software now to shut down the computer when 5 minutes of power is left.  The load with the monitor sleeping is 171 watts.
    This has surge protection and other nice features as well.
    -Noel

  • What is the best practice for changing view states?

    I have a component with two Pie Charts that display
    percentages at two specific dates (think start and end values).
    But, I have three views: Start Value only, End Value only, or show
    Both. I am using a ToggleButtonBar to control the display. What is
    the best practice for changing this kind of view state? Right now
    (since this code was inherited), the view states are changed in an
    ActionScript function which sets the visible and includeInLayout
    properties on each Pie Chart based on the selectedIndex of the
    ToggleButtonBar, but, this just doesn't seem like the best way to
    do this - not very dynamic. I'd like to be able to change the state
    based on the name of the selectedItem, in case the order of the
    ToggleButtons changes, and since I am storing the name of the
    selectedItem for future reference.
    Would using States be better? If so, what would be the best
    way to implement this?
    Thanks.

    I would stick with non-states, as I have always heard that
    states are more for smaller components that need to change under
    certain conditions, like a login screen that changes if the user
    needs to register.
    That said, if the UI of what you are dealing with is not
    overly complex, and if it will not become overly complex, maybe
    states is the way to go.
    Looking at your code, I don't think you'll save much in terms
    of lines of code.
