Care of SSD, SSD Best practice hints

I haven't seen a good post on this, so for those of us with SSDs: what are some best practices for the care of an SSD? Should we repartition and reinstall from time to time? Should we reformat, attach the drive to a Windows PC, and run some of the Windows utilities? Does anyone have tips for keeping the drive fast and working as long as possible?
I'll start with one I know:
Don't use PGP WDE (Pretty Good Privacy Whole Disk Encryption) with an SSD. Because PGP WDE writes to every sector, each write has to erase a block before the data can be rewritten; that, combined with the encryption overhead, leaves the SSD running at non-SSD speeds. If you have to use PGP WDE, you might as well use a normal platter drive.

I keep large data (like downloaded files) on a separate hard drive.

Similar Messages

  • Moving mail server from Xserve G4 to Intel: best practices? Recommendations?

    Hi!
    I will receive a new Intel Xserve soon and possibly want to move mail services from the currently used Xserve G4 (which is working fine) to it.
    The Xserve G4 is running a heavily modified mail setup thanks to pterobyte's excellent tutorials on fixing, updating, extending, dare I say "pimping" Mac OS X Server's mailserver setup.
    What I want to achieve in the long run:
    Have mail services run on the Intel Xserve and have the Xserve G4 act as a mail backup. (They will be connected via permanent VPN, but be in different LANs on different ISPs.) They will then be serving email for at least three distinct domains (all low volume; currently the G4 is serving a single domain using WGM aliases). I want (and need) to switch to Postfix aliases.
    What I need to consider:
    My client desperately wants/needs to update to Leopard Server once it becomes available. Both Xserves will definitely be upgraded to Leopard Server then.
    Time is not an issue at the moment, as the G4 is working very well. I want to keep the work to a minimum in regard to the Leopard switch. I am fine with an interim solution, even if it is somewhat inelegant, as long as it runs fine. The additional domains are not urgent at the moment; it will be fine for them to transfer to the Intel Xserve once we run Leopard.
    Questions:
    Does it pay to do all the work of moving from the G4 to the Intel Xserve (I'd need to compile and configure SpamAssassin, ClamAV, amavisd-new, etc. again, and move all the mailboxes, users, and IMAP and SMTP settings), given that there will be a clean install once Leopard comes out? (I am definitely no fan of upgrading a Mac OS X Server in place; experience has proven to me that this does not work reliably.)
    Are there any recommendations or best-practice hints from your experience when moving a server from PPC to Intel?
    Thanks in advance
    MacLemon

    By all means do a clean install. If time is not an issue, make sure Leopard has been on the market 2-3 months before you do so.
    Here is what I would do:
    1. Clean install of Intel Server
    2. Update all components
    3. Copy all needed configuration files from PPC to Intel Server
    4. Backup PPC mail server with mailbfr
    5. Restore mail backup with mailbfr to Intel Server
    This is all that needs to be done.
    If you want to keep the G4 as a backup server, just configure it as a secondary MX in case your primary is down. Trying to keep mailboxes redundant is only possible in a cluster and a massive pain to configure (Leopard should change that though).
    HTH,
    Alex

  • Best practice - material staging for production order

    Hi Experts,
    could any of you please support me with some best-practice hints on how to handle material staging via the WM-PP interface in a certain case?
    Up till now we had a system where production had no separate storage location in IM; one location covered both the raw material warehouse and production. At the same time, in WM we had separate storage types for production and raw materials - hence material staging transferred goods only within one IM location, between different WM storage types. The material staging is done based on individual production orders.
    Now this needs to change, and a separate IM location needs to be handled for production - which means staging has to move stock between different IM locations, with the WM side administered as well.
    Up till now we used LP10 for staging, then LB13 for TO creation, etc. We can keep going like that, but if we do so, another step is required in IM - movement 311 - where material numbers and quantities have to be entered manually to finish the whole procedure. I would like to avoid this, as it makes the administrative procedure quite long.
    I have been checking the following possibilities:
    1. Set "released order parts - staging" at the control cycle and use MF60 for staging - but I cannot select requirements based on production orders here (I can only find demand by including the component in the selection).
    2. Two-step transfer 313/315 - but this is not a supported procedure (313 TI / TO / 315).
    3. Try to find a solution to create the 311 movement based on the TO, or based on WM stock at a certain storage type / dynamic bin.
    I have failed so far.
    So, could any of you please support me with some useful ideas on how to handle material staging where 311 is included and is definitely the last step of the procedure, but the administrator does not need to enter items manually one by one in MIGO?
    All answers will be appreciated

    Hi,
    Storage location control should be able to take care of your problem.
    If you want to stage the material to a different IM location than the WM location, make the following settings.
    Say location XXXX is your WM location and location YYYY is your production location.
    You have defined production storage type ZZZ for production storage location YYYY and have maintained the supply area for it.
    In WM configuration - Interfaces - IM Interface - Control of Assignment "Plant / Stor.Loc. - Whse Number":
    assign location XXXX as the standard location, and maintain the entry "do not copy sloc in TR" for location YYYY.
    In WM configuration - Interfaces - IM Interface - Storage Location Control for WH:
    this entry ensures that there will be a WM transfer posting between your WM and production storage locations automatically when you confirm your TO. You can also have this done via a batch job if you want cumulative postings (schedule job RLLQ0100).

  • Best practice for exporting from iMovie '08 to iDVD

    I am looking to find out the best practice for exporting from iMovie '08 to iDVD. I have read the other postings that give the basic how-to (export to the Media Browser, then select the video in iDVD). However, my question is a little more technical. I have 1080i HD projects and am interested in burning them to DVD at the best possible quality. What setting should I be using when I publish to the Media Browser?
    I am wondering about quality loss due to more than one conversion/compression, and I suspect this is occurring when I export to the Media Browser. If I am not mistaken, iMovie uses something like H.264 for this; then, when I run iDVD, I suspect it does another conversion/compression, I think to MPEG-2. Not only could this result in a loss of quality, but it also takes extra time. I am interested to know what others think about this.
    Finally, I am looking to create DVDs for a lot of video. I am wondering if there are any USB or firewire hardware devices out there that could speed up the compression. I use the Elgato Turbo.264 when I want to encode to H.264 but I wonder if there is something similar for DVD creation.
    Thanks in advance.

    The standard for video DVD is 720x480, usually MPEG-2 encoded,
    so your HiDef project HAS to be 'downsampled' somehow.
    I would export with QuickTime/Apple Intermediate => that is the 'format' your project is already in, so you avoid any useless in-between encoding.
    iDVD will 'swallow' this huge export file - don't mind: iDVD cares about length, not size.
    iDVD will then convert into DVD standards.
    You can 'raise' quality by keeping projects under 60 min - this automatically sets iDVD to the highest technically possible bitrate.
    Hint: judge picture quality on a DVD player + TV, not on your computer (DVDs are meant for TV delivery).

  • Best Practice - Hardware requirements for exchange test environment

    Hi Experts,
    I'm new to Exchange and I want to have a test environment for learning, testing, patches and updates.
    In our environment we have Exchange 2010 and 2013 in co-existence, and I need a close match to that scenario in my test environment.
    I was thinking of having an isolated (not domain-joined) high-end workstation laptop (quad-core i7, 32 GB RAM, 1 TB SSD) to implement the environment on, but management refused and replied "do it on one of the free servers within the live production environment at the Data Center"...!
    I'm afraid that doing so could corrupt the production environment through some mistake in my configuration - I'm not the kind of Exchange expert who could roll things back if something went wrong.
    Is there a documented Microsoft recommendation on how and where to do this that I could send them?
    Or could someone help with the best practice on where to host my test environment and how to set it up?
    Many Thanks
    Mohamed Ibrahim

    I think this may be useful - it's their official test lab setup guide:
    http://social.technet.microsoft.com/wiki/contents/articles/15392.test-lab-guide-install-exchange-server-2013.aspx
    Also, your spec should be fine as long as you run the VMs within their means.

  • Windows 2012 R2 File Server Cluster Storage best practice

    Hi Team,
    I am designing a solution for 1,700 VDI users. I will use a Microsoft Windows 2012 R2 file server cluster to host their profile data, using Group Policy for folder redirection.
    I am looking for best practice in sizing the storage disk for user profile data. I am considering a single 30 TB disk to host the user profile data, spread across two disk enclosures.
    Please let me know whether a single 30 TB disk can become a bottleneck for holding active user profile data.
    I have SSD writable disks in the storage array, with FC connectivity.
    Thanks
    Ravi

    Check this TechEd session, the Windows Server 2012 VDI deployment guide (pages 8-9), and this article.
    General considerations during volume size planning:
    Consider how long it will take if you ever have to run chkdsk. Chkdsk has seen significant improvements in 2012 R2, but it will still take a long time to run against a 30 TB volume. That's downtime.
    Consider how volume size will affect your RPO, RTO, DR, and SLA. It will take a long time to back up or restore a 30 TB volume.
    Any operation on a 30 TB volume, like a snapshot, will pose performance and additional disk-space challenges.
    For these reasons many IT pros choose to keep volume size under 2 TB. In your case, you can use 15x 2 TB volumes instead of a single 30 TB volume.
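    To make the time cost concrete, here is a rough back-of-the-envelope sketch (mine, not from the thread; the 500 MB/s sustained throughput is an assumption - substitute your measured backup rate):

    ```python
    # Rough estimate of how long a full backup/restore pass takes per volume size.
    def transfer_hours(volume_tb, throughput_mb_per_s=500):
        """Hours to read or write an entire volume at a sustained rate."""
        volume_mb = volume_tb * 1024 * 1024  # TB -> MB
        return volume_mb / throughput_mb_per_s / 3600

    for size_tb in (2, 30):
        print(f"{size_tb:>2} TB volume: ~{transfer_hours(size_tb):.1f} hours")
    # 2 TB -> ~1.2 hours; 30 TB -> ~17.5 hours. Smaller volumes keep
    # chkdsk, backup and restore windows manageable.
    ```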
    Sam Boutros, Senior Consultant, Software Logic, KOP, PA - http://superwidgets.wordpress.com

  • Best practice for migrating data tables- please comment.

    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data, they insist that I save and provide scripts for every single commit, in the proper order, necessary to both build the tables and insert the data from ground zero.
    I am very unaccustomed to this kind of environment, and it seems much riskier to me to try to rebuild the objects from scratch when I already have a perfect, tested, ready model.
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    Please comment on your view of this practice. Thanks!

    >
    Please comment on your view of this practice. Thanks!
    >
    Sounds like the DBAs are using best practices to get the job done. Congratulations to them!
    >
    I have 5 new tables seeded with data that need to be promoted from a development to a production environment.
    Instead of the DBAs just using a tool to migrate the data they are insistent that I save and provide scripts for every single commit, in proper order, necessary to both build the table and insert the data from ground zero.
    >
    The process you describe is what I would expect, and require, in any well-run environment.
    >
    I am very unaccustomed to this kind of environment and it seems much riskier for me to try and rebuild the objects from scratch when I already have a perfect, tested, ready model.
    >
    Nobody cares if it is riskier for you. The production environment is sacred. Any and all risk to it must be reduced to a minimum, at all costs. In my opinion a DBA should NEVER move ANYTHING from a development environment directly to a production environment. NEVER.
    Development environments are sandboxes. They are often not backed up. You or anyone else could easily modify tables or data with no controls in place. Anything done in a DEV environment is assumed to be incomplete, insecure, disposable and unvetted.
    If you are doing development and don't have scripts to rebuild your objects from scratch then you are doing it wrong. You should ALWAYS have your own backup copies of DDL in case anything happens (and it does) to the development environment. By 'have your own' I mean there should be copies in a version control system or central repository where your teammates can get their hands on them if you are not available.
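    To illustrate the "scripts for every step, in proper order" workflow being asked for, here is a minimal sketch of a runner that applies numbered DDL/seed files in sequence. The file layout and the sqlite3 stand-in engine are my assumptions for illustration; real deployments target the production RDBMS, often via a dedicated tool such as Flyway or Liquibase:

    ```python
    import sqlite3                 # stand-in engine for a self-contained example
    from pathlib import Path

    def run_migrations(conn, script_dir="migrations"):
        """Apply 001_create_tables.sql, 002_seed_data.sql, ... in lexical order."""
        for script in sorted(Path(script_dir).glob("*.sql")):
            print(f"applying {script.name}")
            conn.executescript(script.read_text())  # DDL first, then its seed INSERTs
        conn.commit()  # commit only after every scripted step has succeeded

    if __name__ == "__main__":
        run_migrations(sqlite3.connect("deploy_rehearsal.db"))
    ```

    Because the same numbered scripts run unchanged in DEV, TEST and PROD, the deployment document can reference them step by step, and the rollback section can list the inverse scripts in reverse order.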
    As for data - I agree with what others have said. Further - ALL data in a dev environment is assumed to be dev data and not production data. In all environments I have worked in ALL production data must be validated and approved by the business. That means every piece of data in lookup tables, fact tables, dimension tables, etc. Only computed data, such as might be in a data warehouse system generated by an ETL process might be exempt; but the process that creates that data is not exempt - that process and ultimately the data - must be signed off on by the business.
    And the business generally has no access to, or control of, a development environment. That means using a TEST or QA environment for the business users to test and validate.
    >
    They also require extensive documentation where every step is recorded in a document and use that for the deployment.
    I believe their rationale is they don't want to rely on backups but instead want to rely on a document that specifies each step to recreate.
    >
    Absolutely! That's how professional deployments are performed. Deployment documents are prepared and submitted for sign-off by each of the affected groups. Those groups can include security, DBA, business users, IT and even legal. The deployment documents always include recovery steps, so that if something goes wrong or the deployment can't proceed, there is a documented procedure for restoring the system to a valid working state.
    The deployments I participate in have representatives from each of those groups in the room or on a conference call as each step of the deployment is performed. Your 5 tables may be used by stored procedures, views or other code that has to be deployed as part of the same process. Each step of the deployment has to be performed in the correct order, and if something goes wrong, the responsible party assists in the retry or recovery of their component.
    It is absolutely vital to have a known, secure, repeatable process for deployments. There are no shortcuts. I agree that for a simple scenario of 5 new tables and a small amount of data it may seem like overkill.
    But despite what you say, it simply cannot be that easy, for one simple reason: adding 5 tables with data to a production system has no business impact or utility at all unless there is some code, process or application somewhere that accesses those tables and data. Your post didn't mention what changes are being made to actually USE what you are adding.

  • Best Practice for Designing Database Tables?

    Hi,
    I work at a company that sells tracking devices (GPS devices). Our SQL Server database is designed with a table for each device we sell; currently there are 2,500 tables in our database, all with the same columns - they differ only in table name. Each device sends about 4K records per day, and each table currently holds from 10K to 300K records.
    What is the best practice for designing a database in this situation?
    When accessing the database from a C# application, which is better to use: direct SQL commands or views?
    A detailed description of what is best to do in such a scenario would be great.
    Thanks in advance.
    Edit:
    The table columns are:
    [MessageID]
          ,[MessageUnit]
          ,[MessageLong]
          ,[MessageLat]
          ,[MessageSpeed]
          ,[MessageTime]
          ,[MessageDate]
          ,[MessageHeading]
          ,[MessageSatNumber]
          ,[MessageInput]
          ,[MessageCreationDate]
          ,[MessageInput2]
          ,[MessageInput3]
          ,[MessageIO]

    Hello Louis, thank you so much for your informative post. I'll describe in detail the situations I've encountered in my 9 months at the company (working as a software engineer, but I am planning to take over database maintenance since no one is maintaining it right now, and there is nothing more I can do in the code to make it faster).
    At the end of every month our clients generate reports for the previous month for all their cars; some clients have 100+ cars, some have a few. This is when the real issue starts: they are pulling their data from our server over the internet while 2,000 units are sending data to it, and they keep getting read timeouts, since SQL Server gives priority to the inserts and holds all select commands. I solved it temporarily in the code by using "Read Uncommitted" when I initialize a connection through C#.
    The other issue is that generating reports for a month or two takes a lot of time when selecting 100+ units. That's what I want to solve; the problem is that whoever wrote the C# app used hard-coded SQL statements, AND the company is refusing to upgrade from SQL Server 2003 and Windows Server 2003.
    Now, talking about reports: there are summary reports, stop reports, zone reports, etc. Most of them usually depend on at least MessageTime, MessageDate, MessageSpeed, MessageIO and MessageSatNumber.
    So from your post I conclude that for now I need to set up snapshots so that select statements don't get kicked out in favor of insert commands - but does SQL Server automatically select from the snapshots, or do I have to tell it to do so?
    Other than proper indexing, what else do I need? Tom Phillips suggested table partitioning, but I don't think it is needed in my case since our database size is 78 GB.
    When I run code analysis on the app, Visual Studio tells me I'd be better off using stored procedures and views than hard-coded select statements. What difference will this make in terms of performance?
    Thanks in advance.
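    Picking up both themes above - one table per device, and readers blocked by writers - here is a hedged sketch of the usual fix: a single consolidated table keyed by device, plus READ_COMMITTED_SNAPSHOT so selects read the last committed row version instead of waiting on inserts (note this option requires SQL Server 2005 or later). The column names come from the post; the table name, DSN and index are my illustrative assumptions:

    ```python
    import pyodbc  # assumes a SQL Server ODBC driver and DSN are set up

    conn = pyodbc.connect("DSN=tracker;UID=app;PWD=...")  # placeholder credentials
    cur = conn.cursor()

    # One-time database change (run by an admin, not per query): once ON,
    # ordinary READ COMMITTED selects automatically read row versions -
    # no per-connection "Read Uncommitted" workaround needed.
    # cur.execute("ALTER DATABASE Tracker SET READ_COMMITTED_SNAPSHOT ON")

    # One Messages table for all devices, replacing 2,500 identical tables.
    # An index on (DeviceID, MessageDate) keeps per-device monthly reports fast.
    cur.execute(
        "SELECT MessageTime, MessageSpeed, MessageIO "
        "FROM Messages WHERE DeviceID = ? AND MessageDate BETWEEN ? AND ?",
        ("unit-1042", "2014-09-01", "2014-09-30"),
    )
    rows = cur.fetchall()
    ```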

  • IPhone Best Practices - A Work In Progress

    Hello all. I've been tasked with introducing my coworkers into the inner workings of the iPhone, and there are a good number of pointers that I find myself saying over and over again. I'd like to share my best practices with everyone, as well as collect more pointers and opinions from the community at large.
    Care and Handling:
    First - wash your hands, often. Now I know we all do this often anyway, but I'd like to point out that a healthy amount of hand washing will really go a long way to keep your iPhone screen smudge free. The worst offender, unfortunately, is doughnuts. A small layer of sugar will render that area un-tappable, without any real indication that it has done so. If you are frantically tapping the screen on the iPod button and nothing is happening, clean your phone before you do a hard reset.
    Second - Pockets. Keeping your phone in your front pocket is natural and what most of us do. In these summer months, however, keeping your phone in a sweaty front pocket can do a good deal to the dirt level of the screen. If you find yourself cleaning your phone constantly, try a belt clip.
    Lastly - Battery Life. Your iPhone's battery life is in your hands, literally. Being aware of your power consumption and planning accordingly is going to be infinitely more important than the battery's native charge-holding ability. This goes especially for the day of purchase - as tempting as it may be to open the box and activate, immediately running around the house watching YouTube, it is best to let the phone charge for 12 hours before use. Charging the phone every night is an absolute must; skipping a day will kill the battery life as you ride the bottom edge the following day. Most of us have access to a USB port while we're at work, so the best idea is to plug in your phone when you sit down at your desk.
    iPod:
    Large Libraries: In the opening weekend, I got many complaints that you cannot manually manage your music. There is a workaround that has made me change the way I work with all of my iPods: the iPhone specific playlist. Simply create a playlist with all of the music you wish to put on your phone and sync that one playlist. This also helps with sync time - you have a start sync and an end sync, not a constant sync all throughout your music management, slowing your computer down in the process.
    TV Shows: I watch a lot of MST3K, which I have organized into iTunes as TV shows, split into seasons, the works. The problem that has arisen, therefore, is the one of selective synchronization - you cannot specifically select the TV show you want to sync to the device, instead getting the choices to sync all, unwatched, or latest shows. This is problematic when each show is 700MB large. Here's the work around - select all of the episodes of a specific show and right click, selecting "Mark as Not New", removing all of the little blue dots from the episodes. Select the one, three, or five episodes, and right click them, selecting "Mark as New", then sync the last one, three, or five unwatched episodes. The shows you selected will sync.
    iPhoto:
    Many users are complaining that iPhoto opens whenever the phone is connected. This is not a preference of the phone, but rather iPhoto. Remember when you first launched iPhoto and it asked you if you wanted to use iPhoto whenever your camera was attached? iPhoto is detecting that your phone is a camera and launching, just as you told it to do.
    Mail:
    POP accounts - too many unread messages: When first adding a POP account, all of the messages downloaded to the phone arrive as unread. Tapping a message, tapping back, and then tapping the next message can get tedious. Here's the workaround - tap the small down arrow to the upper right hand side of the screen, watching closely to the number next to Inbox. When that number goes down by one, tap the arrow again. If that number hasn't gone down yet, wait a sec, and do not try to tap tap tap tap tap, you'll flood the input queue and crash Mail.
    Syncing Mail accounts - All too often people blame the iPhone when their mail does not work. A perfect test is to sync your accounts from Mail. If they work in Mail, they'll work on the phone; if they are unreliable in Mail, they will also be unreliable on the phone. The Mail client on the iPhone is just as powerful as any other mail client in terms of how it connects to mail servers; if you are having problems, you need to check your settings before blaming the hardware. If you prefer to leave your install of Mail.app alone, create a new user account on your Mac, set up all of the accounts you want there, and use iTunes to sync that data to the phone. Make sure to remove that portion of sync from your actual user account's instance of iTunes, however, or it will all sync back.
    This message has not been downloaded from the server: This message has snagged a couple users, but upon investigation, these users have filled their iPhones to the absolute brim with music and video. It hasn't been downloaded from the server because there is no space to download to - this also applies to the Camera application dumping to the Home screen. Because there is no space, it can't add any new data. Make some room, then be patient as the mail client gets to that message in cleanup (often a sync or reboot will clear it up).
    Safari:
    Safari and iPod: Many users have reported iPod stopping in the middle of browsing, often pouting and pursing their lips crying, "This is terrible, I can't even browse the web and listen to music at the same time?". I then check their phone, and lo and behold they have upwards of eight separate pages open at the same time. This device (like every other computer out there) has a finite amount of memory, each page taking up a significant portion depending on how busy the page is. I've routinely gotten through entire albums while browsing through Safari, but I've got one page open in total, and it's usually mostly text. Keep it to one or two pages open and iPod will run forever if you let it.
    Web Apps: "This web app is terrible, it keeps booting me to Home!" When was your last reboot? How many other pages are open? In the same vein as Safari and iPod, Web Apps need a good deal of breathing room - give it to them. Close down other pages, stop iPod, or even reboot. Give the app a clean slate and it will perform, every time. iPhoneRemote users will attest to this.
    iCal:
    Multiple Calendars - Default Calendar: When adding a new appointment, it adds to the default calendar. Appointments can't be shunted to the correct calendar until after sync anyway, so create an "iPhone" calendar and make that the default. Because it's in that calendar, you'll know enough to move it to the appropriate calendar after sync.
    Please feel free to add your own best practices, and ask questions, too.

    Is there any application you can get for the iPhone to enlarge text and phone numbers?
    If included with an email or on a website, yes, with no application needed.
    If you are referring to the text size for your iPhone's contact list, no.
    Can you insert a phone number from your contact list into a text message?
    No.
    I can't seem to figure it out - does the alarm clock work if you turn off the phone at night?
    No - powered off with the iPhone means powered off. Any phone that provides for this is not powered off - it is in deep sleep or deep standby mode, which the iPhone does not support. If you don't want your phone ringing or don't want to receive SMS at night but you want to use the iPhone's alarm feature as a wake-up alarm, you can turn on Airplane Mode before going to bed, which will also conserve the battery if your iPhone is not plugged in at night.
    Can you send a multimedia text message?
    No.

  • TechNet Wiki - Best Practice Blog Posts

    Lately, we've had some great blog posts about best practices on TechNet Wiki. So we're going to share them with you here...
    Wiki Life: Commenting on Comments... Care to Comment? - 10/16/14 by Ed Price
    How to write a great post on the Wiki - For Dummies - 10/12/14 by Gokan Ozcifci
    Wednesday - Wiki Life: The Importance of Longer, High-Quality Articles - 10/8/14 by Ed Price
    Wednesday - Wiki Life: 10 ways to become the most hated Wiki ninja on the planet - 10/1/14 by Peter Geelen
    Wiki Life: PowerShell PowerPack! - 9/17/14 by Matthew Yarlett
    The most unseen and unspoken TechNet Wiki roles: The mentor Role - 6/22/14 by Sandro Periera
    Wiki Life: Smart Tags - 6/18/14 by Matthew Yarlett
    Wiki Life: Ownership and Credibility - 6/11/14 by Matthew Yarlett
    Wiki Life: Best Practices for building TechNet Wiki Portals - 6/4/14 by Horizon Net
    Wiki life: Technet Wiki tagging, the ugly truth. - 5/29/14 by Peter Geelen
    Wiki Life: Getting too Personal! - 5/14/14 by Matthew Yarlett
    Wiki Life: YOU edited MY article??! - 4/30/14 by Matthew Yarlett
    Wiki Life: Are you right in making it a rite to write? - 4/16/14 by Matthew Yarlett
    Wiki Life - Alerts - 4/9/14 by Alan Carlos
    Wiki Life: Speling an gamma, it is umpotant? - 4/2/14 by Matthew Yarlett
    Wiki Life: How to Translate TechNet Wiki Articles - 4/2/14 by Horizon Net
    Wiki Life: Attention to Detail - 3/19/14 by Matthew Yarlett
    Wednesday - Wiki Life - Mobility - 3/12/14 by Alan Carlos
    Wiki Life: A Picture is Worth a 1000 Words - 3/5/14 by Matthew Yarlett
    Wiki Life: Cut'N'Paste - 2/19/14 by Matthew Yarlett
    Wiki Life: How to Join Leadership - 2/19/14 by Horizon Net
    Wiki Life: Featured Articles in the TechNet Wiki - 2/12/14 by Durval Ramos
    Wiki Life: Code.Format() - 2/5/14 by Matthew Yarlett
    Wiki Life: The CodePlex Corner - 2/5/14 by Horizon Net
    Did you know that we have a layout article? - 1/29/14 by Durval Ramos
    Wiki Life: Get to the point, keep it short! - 1/22/14 by Matthew Yarlett
    Wiki Life: Planning a Great Article - 1/8/14 by Matthew Yarlett
    Wiki Life: Best Practices for converting an MSDN / TechNet Forum thread into a Wiki Article!!! - 12/25/13 by Ed Price
    Wiki Life: Best Practices for Giving Credit - 12/18/13 by Horizon Net
    Wiki Life: How To Fix a Wiki Article TOC - 12/4/13 by Benoit Jester
    Wiki Life: How To Detect Missing Tags Without any Effort - 11/20/13 by Benoit Jester
    Wiki Life: How To Import an Microsoft Excel Spreadsheet Into a Wiki Article - 10/30/13 by Markus Vilcinskas
    Wiki Life: Cross Linking - 10/9/13 by Horizon Net
    Wiki Life: User Groups Portal - 10/2/13 by Horizon Net
    Ed Price, Azure & Power BI Customer Program Manager (Blog, Small Basic, Wiki Ninjas, Wiki)
    Answer an interesting question? Create a wiki article about it!

    Respected sensei Wiki Ninja,
    what else do you need to start a Wiki article?
    Put your signature into practice!
    So I kindly invite you all to continue your braindump over here:
    http://social.technet.microsoft.com/wiki/contents/articles/27905.technet-wiki-best-practices-blog-posts-articles.aspx
    Peter Geelen (Microsoft Belgium) - Premier Field Engineer Security & Identity

  • [XI 3.1] BEST PRACTICE method of Oracle connection for RPTs on Linux

    Business Objects XI (3.1) - SP3.
    Running on Red Hat Enterprise Linux OS.
    7,000+ Crystal Reports 2008 *.rpt objects ONLY (No Universe / No WebI).
    All reports connecting to Oracle 10g databases.
    ==================
    In the past, all of this infrastructure ran on Windows Server OS and provided database access via a named ODBC connection (e.g. "APP_DATA").
    This made it easy to manage, as all the report developers had a standard System DSN called "APP_DATA", which was the same as the System DSN name on all of our DEV, TEST/UAT, and PROD servers for Business Objects.
    When we wanted to move/promote a *.rpt file from DEV to PROD, we did not have to change any "Database Connection" info: it was all taken care of by pointing the System DSN called "APP_DATA" at a different physical Oracle server at the ODBC level.
    Now, that hardware is moving from Windows OS to Red Hat Linux and we are trying to determine the Best Practices (and Pros/Cons) of using one of the three methods below to access the Oracle database for our *.rpts....
    1.) Oracle Native connection
    2.) ODBC connection
    3.) JDBC connection
    Here's what we have determined so far -
    1a.) Oracle Native connection should be the most efficient method of passing SQL-query to the DB with the fewest issues and best speed [PRO]
    1b.) Oracle Native connection may not be supported on Linux - http://www.forumtopics.com/busobj/viewtopic.php?t=118770&view=previous&sid=9cca754b468fc67888ab2553c0fbe448 [CON]
    1c.) Using Oracle Native would require special-handling on the *.rpts at either the source-file or the CMC level to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    2a.) A 3rd-Party Linux ODBC option may be available from EasySoft - http://www.easysoft.com/products/data_access/odbc_oracle_driver/index.html - which would allow us to use a similar Developer / Admin overhead to what we are used to. [PRO]
    2b.) Adding a 3rd-party vendor into the mix may lead to support issues if we have problems with the results or speed of our queries. [CON]
    3a.) JDBC appears to be the "defacto standard" when running Oracle SQL queries from Linux. [PRO]
    3b.) There may be issues with results or speeds of our queries when using JDBC. [CON]
    3c.) Using JDBC requires the explicit-IP of the Oracle server to be defined for each connection. This would require special-handling on the *.rpts at either the source-file (and NOT the CMC level) to change them from DEV -> TEST -> PROD connection. This would result in a lot more Developer / Admin overhead than they are currently used to. [CON]
    ==================
    We would appreciate some advice from anyone who has been down this road before.
    What were your Best Practices?
    What can you add to the Pros and Cons listed above?
    How do we find the "sweet spot" between quality/performance/speed of reports and easy-overhead for the Admins and Developers?
    As always, thanks in advance for your comments.

    Hi,
    I just saw this article and I would like to add some infos.
    First, you can quite easily reproduce the same way of working as with the ODBC entries by playing with the Oracle name resolution on the server. By changing some files (sqlnet.ora, tnsnames.ora, ...) you can define a different Oracle server for a specific name that stays the same across all environments.
    The database name will then be resolved differently depending on the environment, and will therefore access a different database.
    The second option is to change the connection in the .rpt files in an automated way, with a tool like the Schedule Manager. This is an additional web application to deploy that can change the connection settings of thousands of RPT reports in a few clicks. You can find it here:
    http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/80af7965-8bdf-2b10-fa94-bb21833f3db8
    The last option is to do it with a small SDK script; for this purpose, a few lines of code can change all the reports in a row.
    After some implementations on Linux against Oracle databases, I would also prefer the native connection. ODBC and JDBC are deprecated ways to connect to the database. You can use DataDirect connectors, which are quite good, but with large volumes you will see the difference.
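    As an illustration of that first option - one logical name everywhere, resolved per server by tnsnames.ora - here is a short hedged sketch. The alias APP_DATA mirrors the DSN name from the question; the user and query are placeholders:

    ```python
    import cx_Oracle  # assumes the Oracle client libraries are installed

    # "APP_DATA" is a TNS alias defined in each server's tnsnames.ora.
    # DEV, TEST and PROD resolve the same alias to different databases,
    # so reports never need per-environment connection changes.
    conn = cx_Oracle.connect(user="report_user", password="***", dsn="APP_DATA")

    cur = conn.cursor()
    cur.execute("SELECT sysdate FROM dual")  # trivial connectivity check
    print(cur.fetchone())
    ```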

  • Best practice for photo format: RAW+PSD+JPEG?

    What is the best practice for maintaining file formats while editing?
    I shoot in RAW and import into PS CS5. After editing, it allows me to save in various formats, including PSD and JPEG. PS says that if you want to re-edit the file, you should save as PSD, as all the layers are maintained as-is; hence I'd prefer to save as PSD. However, in most cases the end objective is to share the image with others, and JPEG is the most suitable format. Does this mean that for each image it's important to save it in three formats, viz. RAW, PSD and JPEG? Won't this increase the total space occupied tremendously? Is this how most professionals do it? Please advise.

    Thanks everyone for this continued discussion in my absence over two weeks. Going through it, I realize it's helpful stuff. During this period, I downloaded the Aperture trial and have learnt it (there's actually not much learning; it's incredibly intuitive and simple, yet incredibly powerful, and since I used iPhoto in the past, it just makes it easier).
    I have also started editing my pics to put them up on my photo site, and over the past 10 days, here is the workflow I have developed:
    -Download RAW files onto my laptop using the Canon software, into a folder where I categorize and maintain all my images
    -Import them into Aperture, letting the photos reside in the folder structure I defined (rather than having Aperture use its own structure)
    -Complete editing of all required images in Aperture (this takes care of 80-90% of my pics)
         -From within Aperture, open in PS CS5 those images that require editing that cannot be done in Aperture
         -Edit in CS5 and do 'Save'; this brings them back into Aperture
         -Now I have two versions of these images in Aperture - the original RAW and the new .PSD
    -Select the images that I need to put up on my site and export them to a new folder, from where I upload them
    I would be keen to know if someone else follows a more efficient or robust workflow than this; I'd be happy to incorporate it.
    There are still a couple questions I have:
    1 - Related to PS CS5: why do files opened in CS5 jump up in file size? Any RAW or JPEG file originally between 2-10 MB shows up as a minimum of 27 MB in CS. The moment you do some edits and/or add layers, it reaches 50-150 MB. This seems ridiculous; I am sure I am doing something wrong - or is this how CS5 works for everyone?
    2 - After editing a file in CS by launching it from Aperture, I now end up with two versions in Aperture: the original file and the new .PSD file (which is usually 100 MB+). I tried exporting the .PSD file to a folder to upload to my site, unsure what format and size it would end up with; I got a JPEG file within reasonable file-size limits. Is this how Aperture works? Does Aperture give you options for which format to export the file in?
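    On question 1, the size jump is expected rather than a mistake: Photoshop holds the opened image as uncompressed pixel data, so its size is roughly width x height x channels x bytes per channel, regardless of how small the compressed RAW/JPEG was, and each added layer stores another copy of its pixels. A quick worked sketch (the 12-megapixel dimensions are an assumed example, not from the thread):

    ```python
    def uncompressed_mb(width, height, channels=3, bytes_per_channel=1):
        """Approximate size of flattened, uncompressed pixel data."""
        return width * height * channels * bytes_per_channel / 2**20

    # A ~12 MP image (4288 x 2848) opened in 8-bit RGB:
    print(f"{uncompressed_mb(4288, 2848):.0f} MB")   # ~35 MB before any edits
    # The same image in 16-bit mode doubles that:
    print(f"{uncompressed_mb(4288, 2848, bytes_per_channel=2):.0f} MB")  # ~70 MB
    ```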

  • Best Practice to use a single root Application Module?

    I was reading in another thread that it may be a good idea to have all application modules nested within a single root application module (AM), so that there is only one session maintained for the root AM, versus an individual session for each AM. Is this a best practice? If yes, should the root AM be a skeleton AM (with minimal custom service methods), or should you select the most heavily used AM and nest the other AMs underneath it?
    In my case, I currently have 2 AMs (and will have 3 in the future), each representing a different set of use cases within the application (i.e., one supports user searches / shopping-cart-like functionality, and the second supports an enrollment process). It could be the case that a user only accesses pages on the web site to do searches (first AM), or only to do enrollment (second AM), or they may access pages of the site that use both AMs. Right now I have 2 separate AMs that are not nested. Should I nest the AMs and define a root AM?
    thanks

    Hi javaX
    The main physical effect of having 2 separate AMs is that they have their own transactions with the database, and presumably sit in the application module pool as their own instances consuming connections from the connection pool. Alternatively a single root AM with 2 nested AMs share a single transaction through the root AM; only the root AM controls the transaction in this scenario.
    As such it's a question of do you need separate transactions or will one suffice?
    How you group your EOs/VOs etc within the AMs is up to you, but usually falls into logical groups such as you have done. If a single transaction is fine, instead of creating multiple AMs, you could instead just create logical package structures instead. Neither method is right or wrong, they're just different ways of structuring your application.
    When you create a nested AM structure, within your ViewController project in the Data Control Palette you'll actually see 3 data controls mapped to each AM. In addition expanding the root AM data control, you'll see the nested AMs again. Create a dummy project with a nested AM structure and you'll see what I mean.
    If you base your page definitions on anything from the root AM and its children in the Data Control Palette, they will work on the root AM's transaction.
    If you base your page definitions on something from one of the other AM data controls that isn't inside the main root AM in the Data Control Palette, then instead of using the root AM's transaction, the separate child AM will be treated as a root AM and will have its own transaction.
    The thing to take care of when developing web pages is to consistently use either the root AM and its nested AMs, or the child AMs directly with their separate transactions; otherwise it might cause a nightmare of a debugging situation later on, when the same application is locking and blocking on the same records from 2 separate AM transactions.
    Hope this helps.
    CM.

  • Is the DAO pattern the best practice in projects?

    Let me know whether the DAO pattern is the one followed in almost all projects, even though there are alternatives to it. Please clarify this for me; I would also like to know the industry's best practices for using design patterns.

    There is no 'best' pattern; it is all about how and where to apply them. This is very true, but these are common design patterns used in industry for standard problems. Most of the time patterns are used not for some special reason but for manageability and ease of change. So if you have a small application it's OK; but if you are working on a big application that needs to be maintained over time, with frequent changes, then it's better to start learning about patterns, because there will be problems which you can't see right now but eventually will have to take care of.
    That is either incorrect or phrased poorly.
    Patterns come about because someone analyzes different existing code bases and notes that there are similarities in the way they are built.
    It isn't that they are easier to maintain, but rather that, because the pattern has similarities, it is easier to comprehend, understand the limitations of, understand the possible related patterns, etc. That might lead to easier maintenance, but it isn't the reason. The pattern applies if, and only if, the requirements/architecture lead to a situation where it can properly be used.
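    For readers unfamiliar with the pattern under discussion, here is a minimal DAO sketch (illustrative names only, not from the thread): business code depends on a narrow persistence interface, so the storage mechanism can be swapped without touching the callers.

    ```python
    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class User:
        user_id: int
        name: str

    class UserDao(ABC):
        """The DAO contract: callers depend only on this interface."""
        @abstractmethod
        def find_by_id(self, user_id: int) -> User | None: ...
        @abstractmethod
        def save(self, user: User) -> None: ...

    class InMemoryUserDao(UserDao):
        """One interchangeable implementation; an RDBMS-backed DAO would be another."""
        def __init__(self):
            self._rows: dict[int, User] = {}
        def find_by_id(self, user_id: int) -> User | None:
            return self._rows.get(user_id)
        def save(self, user: User) -> None:
            self._rows[user.user_id] = user
    ```

    Whether this indirection pays off is exactly the maintainability trade-off discussed above: it earns its keep when the persistence requirements are expected to change.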

  • Best Practice Advice - Using ARD for Inventorying System Resources Info

    Hello All,
    I hope this is the right place to post a question like this; if not, please point me to a better location for a topic of this nature.
    We are in the process of utilizing ARD reporting for all the Macs in our district (3500 +/- a few here and there). I am looking for advice and would like some best practices ideas for a project like this. ANY and ALL advice is welcome. Scheduling reports, utilizing a task server as opposed to the Admin workstation, etc. I figured I could always learn from those with experience rather than trying to reinvent the wheel. Thanks for your time.

    Hey, I am also interested in any tips. We are gearing up to use ARD for all of our Macs, current and future.
    I am having a hard time entering the user/pass for each machine - is there an easier way to do so? We don't have nearly as many Macs running as you do, but it's still a pain to do each one over and over. Any hints? Or am I doing it wrong?
    thanks
    -wilt
