What is the best methodology to handle database schema changes after an application has been deployed?

Hi,
VS2013, SQL Server 2012 Express LocalDB, EF 6.0, VB, desktop application with an end user database
What is a reliable method to follow when there is a schema change for an end user database used by a deployed application?  In other words, each end user has their own private data, but the database needs to be expanded for additional features, etc. 
I list here the steps it seems I must consider.  If I've missed any, please let me know:
(1) From the first time the application is installed, it should have already moved all downloaded database files to a separate known location, most likely some sub-folder under <user>\AppData.
(2) When there's a schema change, the new database file(s) must also be moved into the location in item (1) above.
(3) The application must check to see if the new database file(s) have been loaded, and if not, transfer the data from the old database file(s) to the new database file(s).
(4) Then the application can operate using the new schema.
This may seem basic, but for those of us who haven't done it, it seems pretty complicated.  Item (3) seems to be the operative issue for database schema changes.  Existing user data needs to be preserved, but using the new schema.  I'd like
to understand the various ways it can be done, if there are specific tools created to handle this process, and which method is considered best practice.
(1) Should we handle the transfer in a 'one-time use' application method, i.e. do it in application code?
(2) Should we handle the transfer using some type of 'one-time use' SQL query?  If this is the best way, can you provide some guidance on the different alternatives for how to perform this in SQL, and where to learn/see examples?  (A rough sketch of what I have in mind follows this list.)
(3) Some other method?
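To illustrate option (2), here is my rough idea of a one-time transfer script, assuming the old and new database files are both attached under different names; all database, table, and column names below are made up:

    -- Hypothetical one-time transfer from the old database file to the new one
    INSERT INTO NewDb.dbo.Customers (CustomerId, Name, Email)
    SELECT CustomerId, Name, Email
    FROM OldDb.dbo.Customers;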
Thanks.
Best Regards,
Alan

Hi Uri,
Thank you kindly for your response.  Also thanks to Kalman Toth for showing the right forum for such questions.
To clarify the scenario, I did not mean to imply the end user 'owns' the schema.  I was trying to communicate that in my scenario, an end user will have loaded their own private data into the database file originally delivered with the application. 
If the schema needs to be updated for new application features, the end user's data will of course need to be preserved during the application upgrade if that upgrade includes a database schema change.
Although I listed step 3 as transferring the data, I should have made it clearer that I was expressing my limited understanding of how this process "might work", since at the present time I am not an expert with this.  I suspected my thinking was limited and that someone would correct me.
This is basically the reason for my post; I am hoping an expert can point me to what I need to learn in order to handle database schema changes when application upgrades are deployed.  For example, if an SQL script needs to be created and deployed, then I need to learn how to do that.  What's the best practice, or the most reliable/efficient way, to make sure the end user's database is changed to the new schema after the upgraded application is deployed?  Correct me if I'm wrong on this, but updating the end user database will have to be handled entirely within the deployment tool or by the upgraded application when it first starts up.
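For example (this is just my guess at how it might work; the version table and all other names here are hypothetical), the application could run something like this the first time it starts after an upgrade:

    -- Hypothetical version check and upgrade, run once at application startup
    IF NOT EXISTS (SELECT * FROM sys.tables WHERE name = 'SchemaVersion')
    BEGIN
        CREATE TABLE SchemaVersion (Version int NOT NULL);
        INSERT INTO SchemaVersion (Version) VALUES (1);
    END

    IF (SELECT MAX(Version) FROM SchemaVersion) < 2
    BEGIN
        -- Upgrade from v1 to v2: add a nullable column so existing rows survive
        ALTER TABLE Customers ADD LoyaltyPoints int NULL;
        UPDATE SchemaVersion SET Version = 2;
    END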
If it makes a difference, I'll be deploying application upgrades initially using ClickOnce from Visual Studio, and eventually I may also use Windows Installer or WiX.
Again, thanks for your help.
Best Regards,
Alan

Similar Messages

  • Database Log File becomes very big, What's the best practice to handle it?

    The log of my production database is getting very big and the hard disk is almost full. I am pretty new to SAP, but familiar with SQL Server. Can anybody give me advice on what's the best practice to handle this issue?
    Should I shrink the database?
    I know increasing the hard disk is needed for the long term.
    Thanks in advance.

    Hi Finke,
    Usually the log file fills up and grows huge due to not having regular transaction log backups. If your database is in FULL recovery mode, every transaction is logged in the transaction log file, and the log is only cleared for reuse when you take a log backup. If it is a production system and you don't have regular transaction log backups, the problem is just sitting there waiting to explode when you need a point-in-time restore. Please check your backup/restore strategy.
    Follow these steps to get the transaction log file back into normal shape:
    1.) Take a transaction log backup.
    2.) Shrink the log file: DBCC SHRINKFILE('logfilename', 10240)
          The above command will shrink the file to 10 GB; the size argument is in MB, and 10 GB is a recommended size for high-transaction systems.
    Finke Xie wrote:
    > Should I Shrink the Database?
    "NEVER SHRINK DATA FILES"; shrink only the log file.
    3.) Schedule log backups every 15 minutes.
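    Putting steps 1 and 2 together, the sequence might look like this (the database name, backup path, and logical log file name are examples only; check sys.database_files for your real logical file name):

        BACKUP LOG MyProductionDb TO DISK = N'D:\Backups\MyProductionDb_log.trn';
        GO
        USE MyProductionDb;
        GO
        DBCC SHRINKFILE (N'MyProductionDb_log', 10240);
        GO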
    Thanks
    Mush

  • What is the best practice to handle JPA methods in JSF app?

    I am building a JSF-JPA web app (no EJB).
    I have several methods that have JPA QL inside.
    Because of that I have to put those methods inside JSF beans to inject EntityManagerFactory (am I right about this?).
    And I do want to separate those methods from the regular JSF beans which are used by page authors.
    And I may need to use them in different JSF managed beans.
    My question here is: what is the best practice to handle that?
    I. Write one or a few separate JSF beans and inject them into regular beans?
    II. Write one or a few separate JSF beans and access them from regular beans using FacesContext?
    III. Others?
    Waiting to hear your opinions.

    You can create named queries on your Entities themselves then just call entityMgr.createNamedQuery("nameOfQuery");
    Normally, we put these named queries in the class of the entity which will be returned. This allows for all information pertaining to a given entity and all ways of accessing that entity (except em.find() and stuff, of course) to be in one place. As long as the entity is defined in your persistence.xml file, any named queries which reside on that entity will be available through the EntityManager.
    As for the EntityManagerFactory, we normally create an application scope bean which holds the factory itself (because this is a heavy-weight object) and then just get all EntityManager instances from that by injecting this bean into whatever needs it. For example, I might have:
    // emfBB is the injected app scope bean which holds the entity manager factory.
    private EmfBB emfBB;

    private void lookupSomeData() {
        // Create a short-lived EntityManager from the shared factory
        EntityManager em = this.getEmfBB().getEmf().createEntityManager();
        // ... run queries with em, then close it when done
        em.close();
    }
    I hope this answered your question?
    ~Zack
    Edited by: zmarr on Nov 6, 2008 1:29 PM

  • What's the best way to handle new versions of software?

    So we have an Application setup for Adobe Reader (just using Reader as a general example) which is part of our Task Sequence. When Adobe releases a new version of Reader, what's the best way to handle getting the new version into our SCCM setup?
    I know I can create a new application and do it all over and select the new file, but can I simply replace the files that the application is pointing to and then somehow tell it to update the DPs with the new files? I'd rather not have to create new applications every time if I don't have to.
    Thanks.

    I think you should continue the way you are doing it right now, creating a new application each time there is a version change. It's a clean way to do it, and it helps with application life cycle management as well, where you keep track of versions across the environments and eventually retire an application, especially where no changes are made without a change control. This is completely process specific and may not be applicable.
    I do agree with the above comments about using the supersedence option.
    However, most apps come with upgrade capabilities from previous versions, so you can upgrade the existing version with the new one if you do not wish to use supersedence.
    Thanks

  • What is the best way to handle input parameter

    When writing sub-vis, what is the best way to handle input parameter range checking? On the front panel I can choose to have numeric values coerced to be within range, but this does not affect constants or controls wired to the VI when used as a sub-vi. I can build range checking into the VI, but this can result in a cluttered-looking VI. Do you have any suggestions?

    As you discovered, the Range and Coercion properties of controls do not work when used in sub-vis.
    Your best option is to go ahead and build your range checking into your sub-vis. If it's something you will be doing a lot, just make your range checking a sub-vi and drop it where needed. This will keep the clutter to a minimum. You may end up with more than one range-checking VI if you need different functionality, but this will still make for less clutter and easier re-use.
    Ed
    Ed Dickens - Certified LabVIEW Architect - DISTek Integration, Inc. - NI Certified Alliance Partner
    Using the Abort button to stop your VI is like using a tree to stop your car. It works, but there may be consequences.

  • What is the best way to turn on my ipod after it has been in water? should i just turn it on or should i plug it in and let it turn on on its own?

    what is the best way to turn on my ipod after it has been in water? should i just turn it on or should i plug it in and let it turn on on its own?

    Firstly, give your iPod a lot of time to dry out before trying to use it: at least a week, and a fortnight might even be advisable. Water takes a surprisingly long time to evaporate from inside something.
    For a case, what you want is the Griffin Survivor:
    http://www.griffintechnology.com/armored

  • What's the best way to handle this?

    I'm not sure what APIs/setup to use for this situation:
    A company wants to store data on projects they do for clients. Each year, the data fields are set (as a result of gov't requirements) and they won't change for any client project for that year. However, the fields required can (and usually do) change every year. So things they require this year might not be needed the next year, and new fields might be introduced.
    While there are likely to be many common fields from year to year, there's no way to guarantee which ones will remain consistent. They also want to be able to do searches on the data and fields, for projects within a year and across years.
    What's the best framework/API/configuration to handle this? EJB? Simple JDBC? If so, how should the database be handled? Won't it have to constantly create new fields in a table? Or is there another way to handle this?
    What's the best way from a "clean architecture" standpoint?
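    For instance, would a key/value design along these lines make sense, so that new fields become rows instead of new columns? (Just a sketch; all table and column names are made up.)

        -- Hypothetical key/value ("EAV") layout: the set of fields can change
        -- per year without altering the tables
        create table project       (project_id int primary key, project_year int);
        create table project_field (field_id int primary key,
                                    field_name varchar(100), project_year int);
        create table project_value (project_id int, field_id int,
                                    field_value varchar(4000),
                                    primary key (project_id, field_id));

        -- Example search: projects in a given year with a given field value
        select p.project_id
        from project p
        join project_value v on v.project_id = p.project_id
        join project_field f on f.field_id = v.field_id
        where p.project_year = 2009
          and f.field_name = 'ClientRegion'
          and v.field_value = 'West';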

    dang, I really have to start over? I finally got all this stuff working again.  well, hopefully it won't be as big a pain this time since the data won't be coming from a different machine.   After completing the Migration Assistant process, I had to reinput a bunch of serial numbers for apps, reinstall print and mouse drivers, etc...  I've finally got the new machine up and running smoothly and now I gotta start over? Sigh.
    I was hoping that either I could rename the current account after deleting the other one, or just move everything from one account to the other and then delete the 'RJM' account.
    ok, so it sounds like here are the steps I need to take:
    - make another full cloned backup of this current machine in Super Duper
    - reboot this machine using the advice in the first post, wipe everything clean and reinstall the OS
    - create a new account like 'user1' and re-do software update (which is like 2.5 gig worth of stuff) and takes like an hour even on a high speed connection
    - then re-do the migration assistant process to the properly named account
    - then delete the 'user1' account
    does that sound right?

  • What's the best way to handle all my data?

    I have a black box system that connects directly to a PC and sends 60 words of data at 10 Hz (worst-case scenario). The black box continuously transmits these words, which contain a large amount of data that is continuously updated from up to 50 participants (again, worst-case scenario),
    i.e. 60 words * 16 bits * 10 Hz * 50 participants = 480 kbps. All of this is via a UDP Ethernet connection.
    I have LabVIEW reading the data without any problem. I now want to manipulate this data and then distribute it to other PCs on a network via TCP/IP.
    My question is: what is the best way of storing my data locally on the interface PC so that I can then have clients request the information they require via TCP/IP? Each message that comes in via the Ethernet will relate to one of the participants, so I need to be able to check if I already have data about that participant: if I do then I can just update it, if I don't I need to create a record for the participant, and if I haven't heard from one for a while I will need to delete it. I don't want to create unnecessary network traffic. I also want to avoid global variables if possible, especially considering that I may have up to 3000 variables to play with.
    I'm not after a solution, just some ideas about how to tackle this problem... I thought I could perhaps create a database and have labview update a table with the data, adding a record for each participant. Alternatively is there a better way of storing all the data in memory besides global variables?
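    If I went down the database route, I imagine something like one upsert per incoming message (T-SQL syntax assumed; the table and column names are made up):

        -- Hypothetical upsert for each incoming UDP message
        merge participant as t
        using (select @participant_id as participant_id) as s
            on t.participant_id = s.participant_id
        when matched then
            update set t.last_heard = sysutcdatetime(), t.payload = @payload
        when not matched then
            insert (participant_id, last_heard, payload)
            values (s.participant_id, sysutcdatetime(), @payload);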
    Thanks in advance.

    Hi russelldav,
    one note on your data handling:
    When each of the 50 participants sends the same 60 "words", you don't need 3000 global variables to store them!
    You can reorganize those data into a cluster for each participant, and use an array of clusters to keep all the data in one "block".
    You can initialize this array at the start of the program for the max number of participants; there is no need to (dynamically) add or delete elements from this array...
    Edited:
    When all "words" have the same representation (I16 ?) you can make a 2D array instead of an array of cluster...
    Message Edited by GerdW on 10-26-2007 03:51 PM
    Best regards,
    GerdW
    CLAD, using 2009SP1 + LV2011SP1 + LV2014SP1 on WinXP+Win7+cRIO
    Kudos are welcome

  • (workflow question) - What is the best way to handle audio in a large Premiere project?

    Hey all,
    This might probably be suitable for any version of Premiere, but just in case, I use CS4 (Master Collection)
    I am wrestling in my brain about the best way to handle audio in my project to cut down on the time I am working on it.
    This project I just finished was a 10 minute video for a customer shot on miniDV (HVX-200) cut down from 3 hours of tape.
    I edited my whole project down to what looked good, and then I decided I needed to clean up all the audio using Soundbooth. So I had to go in clip by clip, using the Edit in Soundbooth --> Render and Replace method on every clip. I couldn't find a way to batch edit any audio in Soundbooth.
    For every clip, I performed similar actions:
    1) both tracks of audio were recorded with 2 different microphones (2 mono tracks), so I needed only audio from 1 track - I used SB to cut and paste the good track over the other track.
    2) amplified the audio
    3) cleaned up the background noise with the noise filter
    I am sure there has to be a better workflow option than what I just did (going clip by clip). Can someone give me some advice on how best to handle audio in a situation like this?
    Should I have just rendered out new audio for the whole tape I was using, and then edited from that?
    Should I have rendered out the audio after I edited the clips into one long track, and then performed the actions I needed on it? Or something entirely different? It was a very slow, tedious process.
    Thanks,
    Aza

    Hi, Aza.
    Given that my background is audio and I'm just coming into the brave new world of visual bits and bytes, I would second Hunt's recommendation regarding exporting the entire video's audio as one wav file, working on it, and then reimporting. I do this as one of the last stages, when I know I have the editing done, with an ear towards consistency from beginning to end.
    One of the benefits of this approach is that you can manage all audio in the same context. For example, if you want to normalize, compress or limit your audio, doing it a clip at a time will make it difficult for you to match levels consistently or find a compression setting that works smoothly across the board. It's likely that there will instead be subtle or obvious differences between each clip you worked on.
    When all your audio is in one file you can, for instance, look at the entire wave form, see that limiting to -6 dB would trim off most of the unnecessary peaks, trim it down, and then normalize it all. You may still have to do some tweaking here and there, but it gets you much farther down the road, much more easily. The same goes for reverb, EQ or other effects where you want the same feel throughout the entire video.
    Hope this helps,
    Chris

  • What is the best way to handle .mod files in premiere pro cs4?

    I recently got a JVC Everio GZ-MG130u and as I'm sure many of you are aware, it saves footage in the .mod format.
    I have googled this and found quite a few different solutions, but I'm just wondering if anything has changed since some of these solutions were posted, or in other words, what is the best way at this current point in time to handle .mod files in premiere pro cs4?
    As far as I know, the best thing to do is convert the .mod to .avi and then import it into premiere so it can be edited.. Is there a better way to do it than this? Also, by doing it this way, will I have separate audio and video tracks?
    Thanks.

    I have just done a bit of reading, here. All of the quotes that follow are from users who have posted in that thread.
    It seems that there isn't one solid answer on this subject. The thread that I linked to was started 2 years ago, and replied to just 4 ago, so it's relatively current.
    I noticed a couple different interesting statements:
    posted by mmontgomery:
    In the case of .MOD, you are actually getting an MPEG-2 file. The way video files work is that there is a codec (COmpressor/DECompressor) algorithm and a file wrapper (or extension). A JVC .MOD file is an MPEG-2 encoded file with a .MOD extension.
    You're faced with two challenges: first, the .MOD file type is only recognized and supported by a few applications. I think we covered some of those already. The interesting thing about wrappers and extensions is that they can be dealt with in a variety of ways. Sometimes all you need to do to convert the video file to a compatible video file is to change the extension. In the case of .MOD files that's not enough. The .MOD wrapper apparently does a few more things than just bear a unique extension name. It requires a slightly more complicated method to convert that file. That is why there is supplied software and why certain third party applications have .MOD support.
    (posted 2 years ago)
    This seems to indicate that Ann's solution of simply re-naming the extension is not good enough, unless I am mis-interpreting what she meant.
    However, another user said:
    posted by futball8:
    I edit with Adobe Premiere Pro CS3. All I have to do is simply rename the .MOD files as .MPG and then import into PP CS3. It takes a small amount of time to conform the audio, but no file conversion is necessary. It's a pretty slick workflow and I've never encountered any problems editing them this way.
    (posted 5 months ago)
    One can only assume that simply re-naming the extension from .mod to .mpg works in some circumstances, and doesn't work in others. I assume it depends largely on the editing software being used. Perhaps there are still issues that futball8 was simply unaware of or never encountered.
    That said, there seem to be a couple of different real solutions to this problem that I have found:
    1. Simply use an editing program that supports the .mod file format. While PE7 and PE8 supposedly support the .mod format, the following should be noted:
    posted by macksgarage:
    While Elements 'supports' these files, it is markedly unstable and frequently crashes while using the files, though the application is otherwise reliable. The solution I have arrived at is to repair the container using ffmpeg. (see #3)
    (posted 5 months ago)
    2. Use a file conversion utility of your choice that will convert .mod to .avi, or another desired format. Import the resulting .avi file into Premiere Pro CS4.
    3. Use FFmpeg. This seems to be the best solution as it does not convert any audio or video:
    posted by macksgarage:
    If you are not familiar with ffmpeg, it may be a bit of a bear to learn, but it's not only useful for this; it functions as a video swiss army knife useful for splicing, muxing/demuxing, and rendering just about any format into just about any other format.
    ffmpeg is an open source project from the linux world, but it has been ported to and is supported on windows. Fetch it here and place it somewhere handy on your system.
    To rewrite the container into a nice, standards compliant .mpg file that doesn't make applications die, WITHOUT rerendering the video or audio itself, I use this command:
    ffmpeg -i INFILE.MOD -acodec copy -vcodec copy OUTFILE.mpg
    This not only renames the file, but actually rebuilds the container around the unmodified video and audio data, yielding a file which works much more stably with Adobe applications, and presumably others as well, as ffmpeg's open source development goals result in very standards compliant files.
    (posted 5 months ago)
    Now, this seems to properly address the issues that can arise from simply renaming .mod to .mpg, as suggested by Ann. So from this point, I assume you can simply import the .mpg into Premiere Pro CS4 (or any other .mpg compliant program) and edit without issue. But it seems like I remember hearing something about Premiere not liking MPEGs, so in that case maybe it'd be better to skip this and go with option #2. Then again, it's been a while since I've touched any NLE, much less Premiere Pro CS4, so I could be completely wrong and it may have no problems handling MPEGs.
    All of the things that I've quoted here came from the same thread, so I don't know how accurate any of this is, but the people that have posted these things seem fairly knowledgeable. If someone reads all of this and can confirm or deny any of it, it would be much appreciated.
    Option 1 is not really an option for me, because I am sticking with Premiere Pro CS4; I'm not going to get another editor just because it has .mod support. That leaves me with options 2 and 3: convert to .avi, or change the file wrapper/extension properly with FFmpeg and then simply import the resulting .mpg file... Which is better? Or is there yet another solution that I am unaware of that would be even better?

  • What is the best way to optimize database resources in a JSP centered webap

    Hi, I am kinda new to JSP, so I make database connections on every page. Assuming I am working on an app where there could be 300 concurrent users, what is the best approach for me to take?
    thanks
    obinna

    java_everywhere wrote:
    > hi, i am kindda i am new to jsp. so i make database connections on evry page.
    JSP shouldn't have anything to do with database access. In any case, you shouldn't be connecting on every page either. You should be recycling connections via a pool.
    > Assuming i am working on an app where there could be 300 concurrent users, what is the best approach for me to take?
    You will have to decide for yourself, because we don't know what your app does, how much hardware you have, network latency, etc.

  • What is the best way to handle collections that contains different object

    Hi
    Suppose I have two classes as below:
    class Parent {
        private String name;
    }

    class Child extends Parent {
        private int childAge;
    }

    I have a list that can contain both Child and Parent objects, but in my JSP I want to display the childAge attribute (preferably not using scriptlets).
    What is the best way to achieve this?

    Having a collection containing different object types is already a bad start.
    How are parent and child related to each other? Shouldn't Child be a property of Parent? E.g.:
    public class Parent {
        private Child child;
    }

    In any case, you could in theory just check Object#getClass()#getName(), but this is a nasty design.

  • What is the best way to handle very large images in Captivate?

    I am just not sure of the best way to handle very large electrical drawings.
    Any suggestions?
    Thanks
    Tricia

    Is converting the colorspace asking for trouble?  Very possibly!  If you were to do that to a PDF that was going to be used in the print industry, they'd shoot you!  On the other hand, if the PDF was going online or on a mobile device – they might not care.   And if the PDF complies with one of the ISO subset standards, such as PDF/X or PDF/A, then you have other rules in play.  In general, such things are a user preference/setting/choice.
    On the larger question – there are MANY MANY ways to approach PDF optimization.  Compression of image data is just one of them.   And then within that single category, as you can see, there are various approaches to the problem.  If you extend your investigation to other tools such as PDF Enhancer, you'd see even other ways to do this as well.
    As with the first comment, there is no "always right" answer.  It's entirely dependent on the user's use case for the PDF, requirements of additional standard, and the user's needs.

  • What is the best way to automatically delete user profiles after x days of inactivity (school lab environment)?

    I work at a school where we have multiple Mac carts that have 30 MacBooks per cart. We image the Macs every summer to delete the older user profiles, but we are looking for a way to possibly have this done automatically throughout the year to help with HDD space. What is the best way to delete user profiles after... say, 180 days... of inactivity automatically? I am open to login hooks, bash scripts, etc. Anything to get the job done. Thanks for any help or advice.

    A search here turned up this post: Deleting inactive users
    It appears that the script posted will do as advertised, though I would test it out on your systems and under your conditions to see if it does what you need.
    regards
    Message was edited by: Frank Caggiano - That script looks for users over 21 days. To look for ones over 180 days change the 21 to 180 in the find command.

  • What is the best approach to handle multiple FK with single table.

    If two tables are joined with each other in more than one way, for example:
    MAIN table is (col1, col2,....coln, person_creator_id, person_modifier_id)
    PERSON table is (person_id, name, address,........ phone) etc
    At database level PERSON_CREATOR_FK and PERSON_MODIFIER_FK are defined.
    Objective is to create a report that shows
    col1, col2...coln, person creator name, person modifier name
    If the above two objects are imported with their FKs into an EUL and Discoverer Plus is used to create the above report, then on the first inclusion of a person name Discoverer Plus will ask you to pick the join (provided the checkbox to disable this feature is not checked). Once you pick the 'person creator' join, it will never allow you to pick the person modifier name.
    One solution is to create a custom folder with a query like:
    select col1, col2, ... coln,
           pc.name creator_name, pc.address creator_address, ... pc.phone creator_phone,
           pm.name modifier_name, pm.address modifier_address, ... pm.phone modifier_phone
    from main m,
         person pc,
         person pm
    where m.person_creator_id = pc.person_id
    and m.person_modifier_id = pm.person_id
    The second solution is to import the PERSON folder twice into the EUL (optionally naming one person_creator and the other person_modifier) and manually define one join per folder, i.e. join MAIN with PERSON_CREATOR on person_creator_fk and join MAIN with the PERSON_MODIFIER folder using person_modifier_fk.
    Now discoverer plus will let you drag Name from each person folder without needing to resolve multiple joins.
    Question is, what approach is better OR is there a better way?
    With solution 1 you will not be able to use functions on folder items.
    With solution 2 there is an EUL design overhead of including the same object multiple times and then manually defining all the joins (or deleting unwanted joins), and this could be a problem when you have person_modifier and person_creator in nearly all tables. It could be more complicated if the person table is further linked to other tables and users want to see that information too. (For instance, if the person address is stored in a LOCATION table joined on location_id and users want to see both the creator address and the modifier address... now you will have to create multiple LOCATION folders.)
    A third solution could be to register a function in Discoverer that returns the person name when a person_id is passed. This will work perfectly for the above requirement, but a downside is that the report will run slower if users need filters on person names (the function will then be used in the where clause). Also, this solution is very specific to the above scenario; it will not work if you want to give the report developer the freedom to pick any attribute from the person table (let's say the person table contains 50 attributes; then it's not a good idea to register 50 functions).
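    For reference, such a lookup function might look something like this (hypothetical names, Oracle PL/SQL syntax assumed):

        create or replace function get_person_name (p_person_id in number)
        return varchar2
        is
            v_name person.name%type;
        begin
            -- Look up the display name for the given person_id
            select name into v_name from person where person_id = p_person_id;
            return v_name;
        end;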
    Any comments/suggestion will be appreciated.
    thanks

    Hi
    In a roundabout way you have really answered your own question :-)
    In my opinion, the best approach (although by all means not the only approach; see below) would be to have the object loaded as two folders, with one join going to the first folder and the second join to the other folder. You would of course name the folders appropriately.
    Here's a workflow that I use all of the time and one that I teach when I'm giving Discoverer Administrator training. It might help you:
    1. Bring in the PERSON folder to begin with
    2. Make all necessary adjustments to bring it up to deployment standard. These adjustments would be: folder name (e.g. PERSON_CREATOR), item names, item placement, default positions, default aggregation and so on.
    3. Create or assign the required lists of values
    4. Create any required calculations
    5. Create any required conditions
    6. Create the first join from this folder to MAIN.
    7. Click on the heading for the folder and press CTRL-C.
    8. Click on the heading for the business area and press CTRL-V. A second copy of the folder, complete with all of the adjustments you made earlier will be inserted into the business area.
    Note: joins are not copied, everything else is.
    9. Rename this folder to, say, PERSON_MODIFIER
    10. Rename the items as appropriate
    11. Add a join from this folder to MAIN - you're done
    Other ideas that I have used and that work well would be to use a database view or create a complex folder. Either will work; in both cases you would need to join on some other column than the ones you referred to earlier.
    I hope this helps
    Best wishes
    Michael
