Changing internal HD... what is the best way to get your data back?

Hello everybody...
I am about to change my MacBook Pro's internal HD, from the standard 120GB to a 320GB, 16MB-cache, 7200rpm Western Digital drive.
I understand that once the new HD is mounted, I will have to format it and then install Leopard from the Install DVD or the MacBook Pro DVDs.
Up to here I am fine.
My question is:
May I use Time Machine to get all the data back onto the new HD? Applications, documents, photos, music... everything?
Or should I use something like CCC?
Thank you for your precious help
J

I just installed a Hitachi Travelstar 7K320 in my MBP 15" and used a combination of CCC, Winclone and Time Machine. The video from OWC and the text from iFixit helped a lot. I put the video on my iPhone and used it as I replaced the drive, which went remarkably smoothly. Got all my data back and everything works great.
The drive is great: no heat or noise issues, and I barely feel it whirl if my hand is over the drive bay. The hardest part for me was removing the tape and lifting the ribbon from the surface of the old drive.
Good luck.
BobM

Similar Messages

  • What is the best way to transfer data from a PC to an iMac?


    If you know how to set up a computer-to-computer Ethernet network, you can give that a try, but a hard drive will be faster than Ethernet unless you have only a small amount to transfer.
    Mac OS X 10.6 Help: Creating a computer-to-computer network

  • What is the best approach to process data on a row-by-row basis?

    Hi Gurus,
    I need to code a stored proc to process sales_orders into invoices. I think I must do row-by-row operations, but if possible I don't want to use a cursor. The algorithm is below:
    for all sales_orders with status = "open"
        check credit limit
        if over credit limit -> insert row into log_table; process next order
        check for overdue
        if there is an overdue invoice -> insert row into log_table; process next order
        check all order_items for stock availability
        if any item has insufficient stock -> insert row into log_table; process next order
        if all checks above pass:
            create invoice (header + details)
    end_for
    What is the best approach to process data on a row-by-row basis like the above?
    Thank you for your help,
    xtanto

    Processing data row by row is not the fastest method out there. You'll be sending many more SQL statements to the database than needed. The advice is to use SQL, and if that is not possible or too complex, use PL/SQL with bulk processing.
    In this case a SQL only solution is possible.
    The example below is oversimplified, but it shows the idea:
    SQL> create table sales_orders
      2  as
      3  select 1 no, 'O' status, 'Y' ind_over_credit_limit, 'N' ind_overdue, 'N' ind_stock_not_available from dual union all
      4  select 2, 'O', 'N', 'N', 'N' from dual union all
      5  select 3, 'O', 'N', 'Y', 'Y' from dual union all
      6  select 4, 'O', 'N', 'Y', 'N' from dual union all
      7  select 5, 'O', 'N', 'N', 'Y' from dual
      8  /
    Table created.
    SQL> create table log_table
      2  ( sales_order_no number
      3  , message        varchar2(100)
      4  )
      5  /
    Table created.
    SQL> create table invoices
      2  ( sales_order_no number
      3  )
      4  /
    Table created.
    SQL> select * from sales_orders
      2  /
            NO STATUS IND_OVER_CREDIT_LIMIT IND_OVERDUE IND_STOCK_NOT_AVAILABLE
             1 O      Y                     N           N
             2 O      N                     N           N
             3 O      N                     Y           Y
             4 O      N                     Y           N
             5 O      N                     N           Y
    5 rows selected.
    SQL> insert
      2    when ind_over_credit_limit = 'Y' then
      3         into log_table (sales_order_no,message) values (no,'Over credit limit')
      4    when ind_overdue = 'Y' and ind_over_credit_limit = 'N' then
      5         into log_table (sales_order_no,message) values (no,'Overdue')
      6    when ind_stock_not_available = 'Y' and ind_overdue = 'N' and ind_over_credit_limit = 'N' then
      7         into log_table (sales_order_no,message) values (no,'Stock not available')
      8    else
      9         into invoices (sales_order_no) values (no)
    10  select * from sales_orders where status = 'O'
    11  /
    5 rows created.
    SQL> select * from invoices
      2  /
    SALES_ORDER_NO
                 2
    1 row selected.
    SQL> select * from log_table
      2  /
    SALES_ORDER_NO MESSAGE
                 1 Over credit limit
                 3 Overdue
                 4 Overdue
                 5 Stock not available
    4 rows selected.
    Hope this helps.
    Regards,
    Rob.
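    For the cases where pure SQL will not do, here is a minimal sketch of the PL/SQL bulk-processing route mentioned above, reusing the tables from the example (only the passing orders are shown; the per-check log inserts would follow the same FORALL pattern):
    DECLARE
      TYPE t_no_tab IS TABLE OF sales_orders.no%TYPE;
      l_ok_orders t_no_tab;
    BEGIN
      -- Collect the orders that pass every check with one set-based query:
      SELECT no BULK COLLECT INTO l_ok_orders
      FROM   sales_orders
      WHERE  status = 'O'
      AND    ind_over_credit_limit = 'N'
      AND    ind_overdue = 'N'
      AND    ind_stock_not_available = 'N';
      -- Insert all invoices in a single round trip:
      FORALL i IN 1 .. l_ok_orders.COUNT
        INSERT INTO invoices (sales_order_no) VALUES (l_ok_orders(i));
    END;
    /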

  • What is the best way to save data from GPIB Device in a file?

    Hi!
    I have a Keithley SourceMeter and want to save readings in a file along with settings from the front panel, timestamps, and several other pieces of info. What is the best way to do this? Which file type? Any recommendations or hints would help me.
    Thanks

    Hi Andy,
    There are 3 main file formats that you can consider writing your data out to in LabVIEW:
    ASCII
    Binary
    Datalog
    ASCII
    ASCII files are useful because every operating system and almost every application can read/write ASCII format files. Use ASCII files when:
    Other users or applications will need to access the data file.
    You will not need to perform random access file I/O.
    File I/O speed is not crucial.
    Disk space is not crucial.
    Examples within LabVIEW Example Finder: Fundamentals >> File Input and Output >> Write to Text File.vi and Read from Text File.vi
    Binary
    Binary byte stream files are more specific to data storage and retrieval. Use binary files when:
    File I/O will remain in LabVIEW only -- no other applications will need to write/read the file. There is no standard formatting for binary files, and thus other applications or operating systems may be unable to read the file.
    Files need to be smaller than ASCII files.
    You need easier and faster random access to data.
    Examples within LabVIEW Example Finder: Fundamentals >> File Input and Output >> Write Binary File.vi and Read Binary File.vi
    Datalog
    When to use datalog:
    If you need to record data with a mixture of types, it can be cumbersome to convert everything to ASCII or to keep track of the binary formatting.
    Datalog format is binary and internal to LabVIEW, so again only use this format if no other applications or operating systems will be needing to perform file I/O on the file.
    Examples within LabVIEW Example Finder: Fundamentals >> File Input and Output >> Write Datalog File Example.vi and Read Datalog File Example.vi
    Good luck!
    Kileen C.
    Applications Engineer
    National Instruments

  • What is the best article for understanding Data Pump

    Hi,
    What is the best article for understanding the relationship / dependency of NETWORK_LINK with FLASHBACK_SCN or FLASHBACK_TIME?
    Why is it mandatory to have NETWORK_LINK when we are using FLASHBACK_SCN or FLASHBACK_TIME?
    Can someone explain the internals of that dependency?
    Thanks
    Naveen

    There's no direct dependency between NETWORK_LINK and FLASHBACK_SCN or FLASHBACK_TIME.
    As noted in this Oracle doc,
    "If the NETWORK_LINK parameter is specified, the SCN refers to the SCN of the source database."
    FLASHBACK_SCN and FLASHBACK_TIME are mutually exclusive.
    http://download.oracle.com/docs/cd/B19306_01/server.102/b14215/dp_export.htm#sthref120
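    To make the combination concrete, here is a hedged sketch of a Data Pump export that uses both parameters together (the directory object, dump file, database link name and SCN below are made up for illustration):
    expdp system DIRECTORY=dpump_dir DUMPFILE=src_consistent.dmp NETWORK_LINK=source_db FLASHBACK_SCN=3842517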

  • What is the best way to audit data

    What is the best way to audit actual changes in the data, that is, to be able to see each insert, update, delete on a given row, when it happened, who did it, and what the row looked like before and after the change?
    Currently, we have implemented our own auditing infrastructure where we generate standard triggers and an audit table to store OLD (values at the beginning of the Before Row timing point) and NEW (values at the beginning of the After Row timing point) values for every change.
    I'm questioning this strategy because of the performance impact it has (significant, to say the least) and because it's something that a developer (confession, I'm the developer) came up with, rather than something a database administrator came up with. I've looked into Oracle Auditing, but this doesn't seem like we'd be able to go back and see what a row looked like at a given point in time. I've also looked at Flashbacks, but this seems like it would require a monumental amount of storage just to be able to go back a week, much less the years we currently keep this data.
    Thanks,
    Matt Knowles
    Edited by: mattknowles on Jan 10, 2011 8:40 AM

    mattknowles wrote:
    What is the best way to audit actual changes in the data, that is, to be able to see each insert, update, delete on a given row, when it happened, who did it, and what the row looked like before and after the change?
    Currently, we have implemented our own auditing infrastructure where we generate standard triggers and an audit table to store OLD (values at the beginning of the Before Row timing point) and NEW (values at the beginning of the After Row timing point) values for every change.
    You can either:
    1. Implement your own custom auditing (as you currently do).
    2. Use Flashback Data Archive (11g). Requires a license.
    3. Version-enable your tables with Workspace Manager.
    I'm questioning this strategy because of the performance impact it has (significant, to say the least) and because it's something that a developer (confession, I'm the developer) came up with, rather than something a database administrator came up with. I've looked into Oracle Auditing, but this doesn't seem like we'd be able to go back and see what a row looked like at a given point in time. I've also looked at Flashbacks, but this seems like it would require a monumental amount of storage just to be able to go back a week, much less the years we currently keep this data.
    Unfortunately, auditing data always takes lots of space. You must also consider performance: custom triggers and Workspace Manager will perform much slower than FDA if there is heavy DML on the table.
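    As a minimal sketch of option 2 (the archive name, tablespace, table and retention period are invented for illustration; Flashback Data Archive is a separately licensed feature):
    -- Create an archive and attach the table to be audited:
    CREATE FLASHBACK ARCHIVE audit_fda TABLESPACE fda_ts RETENTION 5 YEAR;
    ALTER TABLE orders FLASHBACK ARCHIVE audit_fda;
    -- Row history for the last week:
    SELECT versions_starttime, versions_operation, o.*
    FROM   orders VERSIONS BETWEEN TIMESTAMP SYSTIMESTAMP - INTERVAL '7' DAY AND SYSTIMESTAMP o;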

  • What is the best way to retrieve data from a Global Variable?

    Here is what I want to do:
    I have several PCs that run different types of tests. I want to use a global variable, running on a single PC, that acts like a server that can be accessed by the other PCs in my lab. This global variable will store the hostnames of the PCs that are currently running each test, along with a description of the test. A user can then read the different values from this global variable, select a PC, and connect to its desktop using Remote Desktop in Windows.
    Is it possible to write data to the global variable that is running on the single PC?
    What is the best way to do this? Does anyone have a sample VI?
    What is the best way to then read the data from the global variable?
    (I will probably use an array/cluster to store the hostnames.)

    Another pre-LV8 idea...
    A functional global can be accessed using VI Server and called using "call by reference".
    This approach harnesses the TCP functionality built into the VI Server to manage the connection.
    This can be pretty quick and (if the functional global is written correctly) will support buffered, mixed data types. (Try to do that with the Shared Variable.)
    Just another idea,
    Ben
    Ben Rayner
    I am currently active on.. MainStream Preppers
    Rayner's Ridge is under construction

  • Replacing MacBook Pro's optical drive with SSD. What is the best way to migrate data?

    I have a time machine backup on external drive.
    I would like to:
    1. clean install Lion on the new SSD (120 GB)
    2. restore apps from my backup on the SSD
    3. keep only data on the existing HDD
    My questions:
    What is the best way to make it?
    Can I just keep the existing system on the non-SSD HDD and, after installing Lion on the SSD, keep just the data and delete the system libraries? Or is it better to format the whole disk and restore the data from the Time Machine backup (so it is not fragmented...)?
    Thanks for any tips in advance!
    Antonin

    OK, thanks a lot.
    And after reformatting the old HDD, can I tell Time Machine where to recover my folders (apps --> SSD, data --> old HDD)?
    I mean, when Time Machine starts recovering my 450 GB of files onto the 120 GB SSD, will it ask me to decide what to put where?

  • What is the best way to move data from one array to another

    I'm going to be moving data from one array to a larger array on the same RAID but a different controller. (I have some extra drives; I'm also going to be installing Retrospect, so I can't just restore from a backup.)
    The RAID has 450GB of production files, fonts etc.
    What is the best way to move the data over?
    I saw that someone had suggested using ditto. Would that be better than MacMV?
    I also own Bru LE so I could use that.
    Any advice would be appreciated.
    Thanks,
    Paul

    Ditto is a great option -- probably the best.
    ditto -rsrc src_folder /Volumes/targetvolume/targetfolder
    (The -rsrc flag preserves resource forks and HFS metadata; on recent versions of Mac OS X this is ditto's default behavior.)

  • What's the best way to extract data (a substring) from a string?

    Hi,
    I have a field being returned from a function call and the data looks like this:
    sfaqwe4|89uuuroeoi0|kjg3j90493  (It's data...pipe...data...pipe...data)
    What is the best technique to extract the middle set of data between the two pipes?
    Is there any prewritten method to separate the data, or do I have to loop through the field looking for the pipes, etc.?
    Thanks for your help,
    Andy

    <<Copy paste from http://careerabap.blogspot.com/2009_08_01_archive.html - user notified, points removed>>
    Hi,
    The WRITE statement is what we use to substring a field. The syntax is as follows:
    WRITE fieldname+starting_position(field_length) TO variable
    The fieldname is the source (or input). The plus sign precedes the starting position of the substring; the first position in the string is 0. Immediately after it (no space) comes the length of the substring. For example, WRITE lv_field+8(11) TO lv_target copies eleven characters starting at offset 8. You can also substring the variable that you are writing the string to (the target).
    You can also use function module 'STRING_SPLIT_AT_POSITION'.
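    For the pipe-delimited value above, a minimal ABAP sketch (variable names invented) that splits on the pipe instead of computing offsets by hand:
    DATA: lv_input  TYPE string VALUE 'sfaqwe4|89uuuroeoi0|kjg3j90493',
          lv_first  TYPE string,
          lv_middle TYPE string,
          lv_rest   TYPE string.
    * The middle token lands in lv_middle.
    SPLIT lv_input AT '|' INTO lv_first lv_middle lv_rest.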
    Best Regards,
    Edited by: nihad omerbegovic on Dec 14, 2009 3:03 PM
    Edited by: Matt on Dec 20, 2009 4:13 PM

  • What is the best way to store data for this project?

    Hey everyone,
    I have been subscribed to this for a while; not sure if I have ever actually asked anything though.
    I have a project on the go for myself/portfolio.
    It is a booking sheet, whereby I have a GUI that has a diary system of a day followed by time slots. This also has a date picker that can change the date of the booking sheet.
    I want to be able to store mainly strings and ints.
    I need to be able to store, retrieve and on occasion change some data.
    I was looking at using something called JExcelAPI but I can't get that to work at all; I asked for assistance but was referred here.
    What would be the best way to implement this data storage?
    davyk

    Hey everyone,
    Back again,
    I got this little snippet of code working but want to ask you guys for a little bit of help on it, if that's ok?
    try {
         Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
         String dataSourceName = "mdbTEST";
         String dbURL = "jdbc:odbc:" + dataSourceName;
         Connection con = DriverManager.getConnection(dbURL, "", "");
         // create a java.sql.Statement so we can run queries
         Statement s = con.createStatement();
         // create a table
         s.execute("create table TEST1234567 ( column_1 char(27), column_2 char(150), column_3 char(150), column_4 char(150), column_5 char(150), column_6 char(150))");
         // insert some data into the table (the trailing comma before the closing parenthesis was a bug)
         s.execute("insert into TEST1234567 values('"+date+"','"+a+"','"+b+"','"+c+"','"+d+"','"+e+"')");
         // select the data from the table (the original selected column_7, which does not exist)
         s.execute("select column_2 from TEST1234567");
         ResultSet rs = s.getResultSet(); // get any ResultSet that came from our query
         if (rs != null) { // if rs == null, then there is no ResultSet to view
              while (rs.next()) { // this will step through our data row by row
                   // getString(1) reads the first column of the current row's ResultSet as a String
                   System.out.println("Data from column_2: " + rs.getString(1));
              }
         }
         s.execute("drop table TEST1234567");
         s.close(); // close the Statement to let the database know we're done with it
         con.close(); // close the Connection to let the database know we're done with it
    } catch (Exception err) {
         System.out.println("ERROR: " + err);
         err.printStackTrace();
    }
    There are more columns, but I cut this code down.
    My question is:
    I think I want a method with an if statement to see whether the table has been created or not, and if not, create it; but how do I go about this? I have searched the API and Google, but my brain is fried.
    Also, do I always have to do the try/catch and repeat the code from Class.forName through Statement s in every method that deals with the table?
    davy
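    A minimal sketch of the usual JDBC answer to the "does the table exist?" question, using DatabaseMetaData (this assumes the open Connection con from the snippet above; the driver consults the database catalog):
    // Ask the catalog whether TEST1234567 already exists.
    DatabaseMetaData meta = con.getMetaData();
    ResultSet tables = meta.getTables(null, null, "TEST1234567", new String[] {"TABLE"});
    boolean exists = tables.next();
    tables.close();
    if (!exists) {
         Statement s = con.createStatement();
         s.execute("create table TEST1234567 ( column_1 char(27), column_2 char(150) )");
         s.close();
    }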

  • What is the best way to consolidate data on two Macs?

    Hi All,
    About to embark on a small project to move all my data onto my MacBook before finally saying goodbye to my trusty iMac. In order to do this I'll be upgrading the HDD in the MacBook to a larger one and using Time Machine to restore all the original data onto the new drive. This bit I'm good with, as I did it before when I upgraded the drive in my old iMac.
    The main problem I have is that the majority of my photos, music, videos and documents are on my old iMac, and I'd like to discuss the best way to merge these with the smaller amount of data on the MacBook. Both machines are running Snow Leopard and I'm not aware of any tools currently available to do something like this, although I am aware that Lion is supposed to have features that would consolidate everything to the cloud and might make things quite easy. This is something I would be willing to spend the £30 or so on, so if anyone has any experience of using Lion to merge everything across Macs to the cloud, please let me know if you feel this would be a suitable approach.
    Just to summarise the state my data is in currently and the approach I currently intend to take. 
    Have the MobileMe service, so Mail, Address Book, Calendars and all that stuff are all synced across both machines anyway.
    Applications - Some stuff on the old Mac that I no longer use, but nothing here that I really want to migrate. Some specific application stuff is mentioned below.
    Movies - This is the bit that I'm most concerned about.  Have a lot of legacy iMovie stuff on the old iMac which I'd like to keep.  iDVD as well.  Not sure of the best way to get this across and maybe it even warrants a separate discussion. Aside from that most of my older video stuff and projects are on the iMac with the newer stuff on the MacBook.  I'm unaware of any function to export and import this but I have a large HDD which I can use to copy stuff over and am going to look into this a bit further.  There will be a small amount of duplication here but this is manageable. 
    Documents - Will just copy these across and organise them as and when I need them.  Again, a small amount of duplication but this is manageable. 
    Music - Pretty much all my stuff is on the old iMac. Can just export this and copy it onto the MacBook.
    Pictures - Bit complicated, this. Most of my stuff is on the old iMac but there are some newer albums on the new MacBook. No duplication, so presuming I can just export the library off the old iMac and import it onto the new MacBook.
    No websites or anything like that. 
    iPhone currently syncs with iMac but once I have all the stuff over it should not be too hard to change this. 
    Just my user account on each machine that I really want to keep.  Shared documents and other users accounts all contain temporary stuff that I'm happy to lose or archive. 
    Given this, is anyone able to offer advice as to: 1) whether this is the best approach, 2) whether I have considered everything, and 3) whether there are any tools that might help me?
    Many thanks,
    Tom

    OK, seeing as no one replied (presumably because a lot of this information is on the forums in bits elsewhere), here's how I've got on so far.
    Applications - just went through them. About the only one I needed was my media server app. Just downloaded and re-installed it; a quick look back through my email found the license key and it all went on fine. The installation never seemed quite right on my old machine, so that solved that problem too.
    Movies - new iMovie: just copied the clips and projects across into their respective folders. Seems to have worked, but I haven't checked it all that thoroughly. There is some duplicate footage here, but I can trim this out at some point when I get a chance to go through it.
    Documents - Just copied these across. 
    Photos - used an app called iPhoto Library Manager. You can download it for free but have to pay to use the part that consolidates your libraries. Possibly, if I had been willing to spend a bit more time, I could have got away without using this, but given that I didn't know the state of my different libraries and just how many duplicates I had, it was too much of a convenience to ignore. Also got my library into a state where I can now spend a few hours organising it a bit better with Faces / Events etc.
    Not attempted Music or the iPhone sync yet, as I've been stuck trying to solve a problem with my power adapter.

  • What is the best way to append data from one field to another?

    I have the following table, table1:
    Name            Null?    Type
    --------------  -------- -------------
    MAIL_ID         NOT NULL NUMBER(10)
    LAST_NAME                VARCHAR2(45)
    FIRST_NAME               VARCHAR2(45)
    MIDDLE_INITIAL           VARCHAR2(1)
    ADDRESS_1                VARCHAR2(45)
    CITY                     VARCHAR2(35)
    STATE                    VARCHAR2(2)
    ZIP                      VARCHAR2(10)
    REMARKS                  VARCHAR2(200)
    The table has duplicate entries that need to be removed. The records that will be removed need the data in the Remarks column appended to the Remarks data of the record that is not deleted.
    For example, the following listing shows a sample of the duplicate records.
    Mail ID Last Name First Name M Address City St ZIP Remarks
    189 BROWN STEPHEN 6706 MOESER LN EL CERRITO CA 94530-2909 Sf7#s124,f16#d7996(NML)[Cl#117][Ml#1649][NMf1#d288][NCf9#d319][SNl#e62]
    211023 BROWN STEPHEN B 6706 MOESER LN EL CERRITO CA 94530 RLl#a12047[IDl#i398]
    287796 BROWN STEPHEN B 6706 MOESER LN EL CERRITO CA 94530 SNl#e1163
    The following listing shows how the kept record should appear after the duplicate records are deleted.
    Mail ID Last Name First Name M Address City St ZIP Remarks
    189 BROWN STEPHEN 6706 MOESER LN EL CERRITO CA 94530-2909 Sf7#s124,f16#d7996(NML)[Cl#117][Ml#1649][NMf1#d288][NCf9#d319][SNl#e62]RLl#a12047[IDl#i398]SNl#e1163
    I have the process of deleting duplicates working but have yet to determine the best way to move the Remarks data from the deleted records to the preserved record.
    I know there are probably various ways to approach this.
    Any suggestions will be greatly appreciated!
    Here is the SQL for deleting duplicates.
    DELETE FROM table1
    WHERE mail_id IN (SELECT mail_id FROM table1
                      WHERE not first_name = 'Null'
                        AND not last_name = 'Null'
                        AND not city = 'Null'
                        AND not state = 'Null'
                        AND not last_name = 'Anon'
                      MINUS
                      SELECT MIN(mail_id) FROM table1
                      GROUP BY first_name, last_name, city, state, address_1, organization, title);
    THANKS in advance!!!!

    Here's a quick and dirty example. There's probably a better way to do it, but this is what I came up with quickly.
    My table looks like this:
    MAIL_ID  LAST  FIRST  PHONE         REMARKS
    123      Ruff  Shawn  555-555-5555  Called 10-10-04
    135      Ruff  Shawn  555-555-5555  Called 10-12-04
    201      Ruff  Shawn  555-555-5555  Called 10-19-04
    The code below will concatenate the remarks column from the rows, delete the 135 and 201 rows, and then update the 123 row with the concatenated remarks.
    declare
      l_remarks varchar2(500);
      l_min_mail_id number;
    begin
      -- Find the row to keep: the lowest mail_id in the duplicate group.
      select min(mail_id) into l_min_mail_id
      from test
      group by last, first, phone;
      -- Start with the keeper's own remarks.
      select remarks into l_remarks from test where mail_id = l_min_mail_id;
      -- Append each duplicate's remarks, then delete the duplicate.
      for i in (select mail_id, remarks from test
                where last = 'Ruff'
                  and first = 'Shawn'
                  and phone = '555-555-5555'
                  and mail_id <> l_min_mail_id)
      loop
        l_remarks := l_remarks || ',' || i.remarks;
        delete from test where mail_id = i.mail_id;
      end loop;
      -- Write the concatenated remarks back to the keeper row.
      update test set remarks = l_remarks where mail_id = l_min_mail_id;
      commit;
    end;
    Hope this helps.
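    A set-based alternative, as a sketch: this assumes Oracle 11gR2+ for LISTAGG and that duplicates are defined by name and address (columns taken from the question's table1); run the update before deleting the duplicates.
    UPDATE table1 t
    SET    t.remarks =
           ( SELECT LISTAGG(d.remarks, '') WITHIN GROUP (ORDER BY d.mail_id)
             FROM   table1 d
             WHERE  d.last_name  = t.last_name
             AND    d.first_name = t.first_name
             AND    d.address_1  = t.address_1 )
    WHERE  t.mail_id IN
           ( SELECT MIN(mail_id)
             FROM   table1
             GROUP  BY last_name, first_name, address_1 );
    -- ...then delete every row that is not the MIN(mail_id) of its group.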

  • What is the best software to record DAT tape to computer?

    Hi guys,
    I have some old DAT recordings of my rock band from 1971-74. I want to transfer the DAT to my MacBook Pro Retina. My plan would be to hit the "play" button on the DAT and record the whole tape to the MacBook Pro... and then go back, separate the songs I want to keep and add some EQ and compression (if I can figure out how that works). I have the setup ready to go, from a Behringer UCA222 out of the DAT into USB on the MacBook Pro. I also plan to send the recording to my external Thunderbolt HD. And I probably want to save the recordings at a higher quality first and then convert to MP3.
    What would you suggest would be the best software to use to transfer on a project like this?
    I don't have Logic, and don't want to spend the $ for this project. I am looking at using GarageBand, but don't know if that eats up too much HD space and processing power. I've read that Audacity might be an option, but I'm not sure how the quality would compare to GarageBand.
    Any other suggestions or comments would be really appreciated.
    Thanks,
    Bob

    I used to face this issue every now and then (as a freelancer) when I used to cut on FCP7.
    If the material is DV and was captured using Sony Vegas, Premiere or Edius, the footage generally worked fine. You'll find most people will tell you to convert it regardless; however, I found no issues working with them.
    A problem you'll definitely face is HDV material. Most PC editing software writes HDV in its own codec, which can't be read in FCP.
    The only thing I can think of is to try converting the HDV material to QuickTime using something like MPEG Streamclip, or something similar.
    Good luck

  • Tabular Modeling. What is the best practice for importing data into VS to limit the records in the designer?

    Should I wrap the queries in a procedure with @StartDate and @EndDate parameters and create a test partition that passes a small date range?
    Or can I use the Table Properties screen to put the command there, and will it run without affecting, or being affected by, the partitions? It would be nice if the SQL statement on this screen were independent of the partitions, so I could just leave it with the command text = EXEC TransactionDetail '2014-01-01', '2014-05-31', especially since many tables may load based on a date range; I would not want to jump in and change that query on all of them.
    Is there a way to have a parameter in the project so all tables would get the same @StartDate and @EndDate, and I could change it in one place?
    And I am not stuck on these questions/options; if there is a better way to mass-change the queries to run a subset of data for the designer, I'd like to hear it.
    Thank You,
    Phil
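    For concreteness, one hypothetical shape for such a wrapper procedure (the table and column names are invented; the defaults return the full range for processing, while the designer query passes a narrow range):
    CREATE PROCEDURE dbo.TransactionDetail
        @StartDate date = '1900-01-01',
        @EndDate   date = '9999-12-31'
    AS
    BEGIN
        SELECT *
        FROM   dbo.FactTransactionDetail   -- hypothetical source table
        WHERE  TransactionDate >= @StartDate
          AND  TransactionDate <= @EndDate;
    END;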

    Hi Phil,
    According to your description, you are looking for the best way to control the rows that are loaded into a table, right?
    When importing data into a table of a tabular model, we can apply filters to control the rows that are loaded. After you have imported the data, you cannot delete individual rows. However, you can apply custom filters to control the way that rows are displayed; rows that do not meet the filtering criteria are hidden. For detailed information, please refer to the link below.
    Filter Data in a Table (SSAS Tabular)
    If I have misunderstood anything, please point it out.
    Regards,
    Charlie Liao
    TechNet Community Support
