X201 - Initial Thoughts

My X201 just arrived this afternoon - patience to all those still waiting for their machines to clear the Louisville depot, it'll be worth it!
A handful of initial reactions:
Hooray for minimal packaging!! Since this thing already has a hefty carbon footprint from shipping, it's nice to see Lenovo pare down the extras. Just the box, cable, and one plastic bag with the manual (only about 20 pages) - oh yeah, and the computer.
The machine is really light (6-cell config) - my point of reference is a 9-cell X60 Tablet (about 4.7 lbs), so dropping down to 3 lbs is nice.
It's a touch wider than the X60, but shallower front-to-back by at least an inch or two. Thickness is right around 1" - pretty much the same as the X60.
Mine's configured with the SSD, so it's as silent as can be - I've heard the CPU fan throttle up (barely audible) just once in the last half dozen startups.
Comes plastered with 5 (quickly removed) badges for everything: Energy Star, Win7, Intel i7, Verizon, and Lenovo. A sharp knife or X-Acto blade will get rid of them easily.
The latch to open the lid is a little quirkier than what I'm used to on the X60 - that one just slid right open, while on the X201 it sticks just a little bit. Same goes for the physical WiFi switch on the left side: it's just a little bit harder to push into place than the one I'm used to.
The one USB port being yellow is just a ThinkPad design feature, like the power plug, right? I didn't bother firing up PC Wizard yet, but I'm pretty sure there's no USB 3.0 support on this thing.
Setting up Windows was easy. Once past that, the real work began: this thing is *loaded* with bloatware. Verizon WWAN, an Office trial, an antivirus trial, a lot of Lenovo software (some of it useful, which I kept, but most unnecessary) - I spent about 15 minutes going through a couple dozen entries on the Remove Programs screen just trimming the waste. This is my only major disappointment so far - with as much flak as manufacturers catch for loading their machines with crap, it's a wonder they continue to do so. Heck, doesn't Lenovo offer a machine pre-installed with Linux (one of the Ubuntu variants)?
Looks like there is a Lenovo partition of about 10GB (only 3 used) - I'll be getting rid of that pretty soon.
Having both the Trackpad and Trackpoint is nice - but I'm so used to the Trackpoint that I haven't really used the Trackpad much yet. 
The integrated Intel HD graphics that come on the i7 (and i5?) chips look like they'll hold up just fine. Hulu at full screen with absolutely no hiccups.
Haven't checked the RAM on this machine to see if it's EPP-ready. Will get around to it eventually.
If anyone has questions, I'm happy to try to answer. 

OK, so you should be pleased with the results.
I switched back to the Intel controller that the computer came loaded with, and TRIM is enabled (when I ran the check from the Command Prompt, I got a "0").
In case you're curious, it's the "Intel 5 Series 6 Port SATA AHCI Controller".
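For anyone wanting to run the same check: the Command Prompt test referred to above is, I believe, `fsutil behavior query DisableDeleteNotify`, where a result of 0 means TRIM is enabled. A small sketch of how to read that output (the helper function is mine, not part of any tool mentioned here):

```python
# Interprets the output of the Windows command:
#   fsutil behavior query DisableDeleteNotify
# A value of 0 means delete notifications (TRIM) are NOT disabled,
# i.e. TRIM is enabled; 1 means TRIM is disabled.
def trim_enabled(fsutil_output: str) -> bool:
    """Return True if the fsutil output indicates TRIM is enabled."""
    for line in fsutil_output.splitlines():
        if "DisableDeleteNotify" in line:
            value = line.split("=")[1].strip()
            return value.startswith("0")
    raise ValueError("DisableDeleteNotify not found in output")

print(trim_enabled("DisableDeleteNotify = 0"))  # True -> TRIM is on
```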
As for the R/W times, there's really not too much difference between the Intel and the standard Win 7 driver.
I'd post the images, but I'm too lazy to go find some place to host them.
This is all running CrystalDiskMark 2.2 (5 tests / 50MB, results in MB/s), BTW:
              Win 7 driver   Intel driver
Seq Read:         182.9          175.5
Seq Write:        178.3          175.6
512K Read:        155.4          144.3
512K Write:       134.2          136.5
4K Read:           15.56          13.95
4K Write:           7.4            6.7
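For what it's worth, the gap between the two drivers can be expressed as a percent difference; a quick sketch using the numbers above (the labels are just my shorthand):

```python
# Percent difference of the Intel driver vs. the stock Win 7 driver,
# using the CrystalDiskMark 2.2 results above (MB/s).
win7  = {"Seq R": 182.9, "Seq W": 178.3, "512K R": 155.4,
         "512K W": 134.2, "4K R": 15.56, "4K W": 7.4}
intel = {"Seq R": 175.5, "Seq W": 175.6, "512K R": 144.3,
         "512K W": 136.5, "4K R": 13.95, "4K W": 6.7}

for test, base in win7.items():
    delta = (intel[test] - base) / base * 100
    print(f"{test:8s} {delta:+5.1f}%")  # e.g. Seq R comes out to -4.0%
```

The sequential numbers land within a few percent either way; the 4K results favor the stock driver by roughly 10%, which matches the "not too much difference" impression.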
As for the Firmware upgrade for the Drive, I'm not sure that the particular FW upgrade is applicable to this drive. 
I just ran PC Wizard for the specs and here's what the Drive is:
Model : SAMSUNG MMCRE28G8MXP-0VBL1 
Revision (Firmware) : VBM1EL1Q 
The PDF at the Firmware page lists a different Revision number altogether. 

Similar Messages

  • Unable to view PDF created in LiveCycle Designer ES2 - initially thought to be a user/OS issue

    Unable to view a PDF created in LiveCycle Designer ES2. I initially thought this was a user/OS issue when I created a document for someone who is new to a Mac laptop. She could not view the document through email. Unfortunately, I began seeing the same error in my own document folders when searching for another document while showing files as icons instead of a list. I can open the file without a problem, although I see the error she sees only while viewing the icons in my folder. I am using a Windows 7 PC. Now, I also know that if the document is downloaded, it can be viewed.
    Other notes:
    If trying to access the form via the internet, the same error is seen through Chrome, Firefox, and Mozilla but NOT through IE
    Everyone seems to have the latest or a very recent READER
    The form is compatible with Reader versions 7 and up
    Again, downloading from the internet to the computer appears to allow the file to open properly
    Document cannot be viewed on the Galaxy Tab 2 via Chrome or the pre-installed Internet Browser, nor can it be viewed through the Reader after download to tablet. I did not try on an Apple iPad.
    All parties involved are up-to-date with virus protection.
    Below is a link to the exact message received when trying to open the document.
    https://www.dropbox.com/s/wmjqzwyriovg9vi/Adobe%20Error.pdf

    You're on to something KJ!  Yes the form was created in LiveCycle Designer ES2 which came bundled with my Adobe X Pro.  I began creating a new form yesterday and found that I could not preview the form, rendering this same "error" instead.  I ran a repair on my Adobe and at first it seemed to fix the issue but after making some changes to the form I tried to preview again and couldn't.  Here is what I get when I try to preview my forms in Designer ES2: 
    When I click the OK button, it then gives me that single static page as mentioned above in previous posts.
    I searched Adobe yesterday trying to figure out how I could repair the LiveCycle Designer or if there was some sort of patch that I haven't gotten but was not able to find anything.
    (Sorry for the delay in response, I've been on vacation.)

  • Unable to view PDF - initially thought to be a user/OS issue for a new user to Mac.


    Sorry, this is a user-to-user forum and we're just customers who help out when we can, so things don't always happen right away.
    Reader, regardless of OS or device, is, as the name implies, only a reader. There are different mail settings between Mac Mail, Outlook, and Thunderbird, as well as Android's mail app, whose name escapes me.
    Have you checked in the LiveCycle forum? There are people there with FAR more experience using LC-developed forms. It's part of my Creative Suite, but I've never even opened it myself.

  • Maybe premature, but challenging. Initial thoughts?

    We all know MPE is in the future sometime. We also know the GTX 285 will be supported. We don't know about the GTX 295.
    If you were in the market for a new video card and saw these two choices at approximately the same price, and considered the difference in clock speeds (while the difference in memory is almost negligible), what would be the better performer? A 702 MHz clock with only 240 CUDA cores, versus a 576 MHz clock with 480 CUDA cores?
    I know it is premature and the details on the new GTX 470/480 still have to come out (March 26-th) but what would be your initial thoughts?
    I'm leaning towards the GTX 295, but it is nothing more than a gut feeling.
    Dennis, can you give your opinion? (and look at my PM to you).

    The 295 will not be supported as of now. Adobe may change their mind, assuming NVIDIA allows them.
    Cores would be king....
    Scott
    ADK

  • Initial thoughts on Lynnfield versus Bloomfield

    Intel has released the next generation of Nehalem CPUs.
    The first generation, known as Bloomfield, comprises the i7-9xx series of CPUs.
    The new generation is known as Lynnfield and comprises the i5 and the i7-8xx series.
    So what does that mean for our new editing rig? Should we run out and opt for this new generation, or would we be better off with the established Bloomfield? Here are some initial thoughts.
    The Bloomfield platform uses the X58 chipset and an LGA-1366 socket. The Lynnfield uses the P55 chipset and an LGA-1156 socket. That is an important difference, because P55 motherboards are less expensive than X58 motherboards. But, like Johan Cruyff used to say, every advantage has its disadvantage. Next year we will be seeing the next generation of CPUs, based on 32 nm technology, with 6 cores and hyper-threading. A simple BIOS update will allow these new CPUs to be mounted on X58 motherboards. That is not possible with the P55 boards. So, P55 is less expensive but also less future-proof. X58 is more expensive but can also support the new 6-core CPUs.
    On to the new CPU's.
    The difference between the i5 and i7-8xx is hyper-threading. The i5 has no hyper-threading. Apart from clock speed they have the same architecture and the same TDP, 95W. In comparison to Bloomfield there are a number of distinctive differences. The two QPI (QuickPath Interconnect) links have disappeared. No more triple-channel memory, only dual-channel DDR3. In its place has come an on-die PCIe controller with 16 lanes. The Bloomfield supports 36 PCIe lanes, but not on-die. This is an important handicap of the Lynnfield architecture, because it precludes the use of multilane PCIe cards other than the video card.
    First tests show that the different memory controller does not have any relevant impact, whether triple channel DDR3-8500 on the Bloomfield, or dual channel DDR3-10600 on the Lynnfield is used. However, only the high-end P55 motherboards have 6 DIMM slots to allow 12 GB of RAM, other motherboards often have only 4 DIMM slots and are thus limited to 8 GB.
    The new heatsinks still suck, so if you want to get this new CPU, invest in a good CPU cooler.
    The Turbo mode has been significantly improved and Windows 7 has been optimized to use it to advantage with core parking. Turbo mode can give a performance increase of more than 10% with applications that are not multi-threaded. CS4 is multithreaded, so the advantage of the turbo mode is probably very small.
    Lynnfield's uncore is faster than Bloomfield's, but the downside is that the on-die PCIe controller limits overclocking at stock voltages. Lynnfield is not good at overclocking, thanks to the PCIe clock being tied to BCLK, unless you increase vCore.
    Intel positions these CPU's as mid level, not high-end and I think they did a good job. The CPU's are good and fast and leave all AMD processors in the dust.
    The i5-750 is a great entry level CPU, it is nicely priced and delivers good performance, but lacks hyper-threading.
    The i7-860 is on the same performance level as the i7-920, but more expensive, which can be offset by less expensive mobo's, with all their limitations. Also not as easily overclocked.
    The i7-870 performs between the i7-940 and i7-950, but is easily beaten by an overclocked i7-920 for less than half the price.
    All these CPU's do a good job, but the limitations are there as well:
    1. Only 16 PCIe lanes
    2. Stock voltage overclocking is lacking
    3. No support for 6-core Gulftown CPU's
    The major improvement with Lynnfield is the Turbo mode. That makes it even more difficult to choose Bloomfield or Lynnfield, because the Gulftown successor will undoubtedly improve on that even more and choosing Lynnfield now requires a new mobo and a Gulftown if you want to go the 6-core route.

    I would like to add one new entry into this mix: the new Intel Clarksfield i7 series. While not the complete full power of the current i7 chips, this new addition to the i7 family will be a mobile processor, and it is supposed to be announced on Sept 23, giving laptops a new quad-core lease on life.
    "Quad core in future laptops
    The new mobile platform from Intel, code-named Calpella, will supposedly be launched on September the 23rd. The Calpella platform is designed for the quad-core Clarksfield processors, which feature the Core i7 line-up. The Calpella platform is designed for high-end laptops and will open up for mass usage of quad-core CPUs in mobile computers. For now there will be only 3 Core i7 mobile CPUs:
    * Core i7-720QM - 1.60 GHz, 256 KB L2 cache per core, 6 MB unified L3 cache, 45W TDP
    * Core i7-820QM - 1.73 GHz, 256 KB L2 cache per core, 8 MB unified L3 cache, 45W TDP
    * Core i7-920XM - 2.00 GHz, 256 KB L2 cache per core, 8 MB unified L3 cache, 55W TDP
    All three CPUs come in 989-pin mPGA packaging and with an integrated dual-channel DDR3 memory controller. They all support HyperThreading, so 8 threads can be run simultaneously. Because of the memory controller the Clarksfield CPUs will use a little more power than existing Core 2 Quad mobile CPUs, but overall system power usage is expected to be about the same."
    "Taking a performance improvement over existing mobile processors, the Intel Core i7-720QM is evaluated at $364, the 820QM at $546, and the 920XM at $1,054" - the source.

  • Initial thoughts on Jdev 10.1.3 3673 build

    Much better guys...
    Let's be honest, 10.1.2 was only going to convince someone who has been locked in a room on a very remote island, and been exposed to nothing but Oracle technologies, that Oracle was serious... but I actually quite like this...
    A little bit about my background:
    Up to very recently I was an IT contractor based in the UK who in recent years has worked with IBM RSA (Eclipse 3 based), stand-alone Eclipse, NetBeans and IntelliJ (which is still by far the best Java IDE, but unfortunately few companies like paying for IDEs these days....). As a contractor I am sure most of you can imagine that although I am not the one who has signed the cheques, I have often been the one who has made the decision on which technology a lot of companies have adopted. (God, I am exposing myself to a lot of abuse here!!). I've now "joined the enemy" and gone permie.
    I am pretty new to JDeveloper, although I have toyed around with it in the past; I have usually got so infuriated by its pi$$ poor integration with external tools such as Ant and CVS, and incredibly annoying bugs, that I have dismissed it out of hand. However, it is pleasing to see that a lot of the integration issues have been addressed in this release, and someone at Oracle has finally realised that Java developers refactor Java classes and packages frequently (especially in the early stages of a development), and we would prefer it if doing so did not completely bollox up the project files... In fact - how about actually doing what it says on the tin??
    Well, this version almost does... It still has bugs refactoring models at package level, and I am not completely comfortable with the CVS integration, although it is certainly as good as RSA.
    The benefits to JDeveloper - the ADF BC integration is really good when you get used to it... Oracle should focus on this and help with integrating with it - especially with ADF rich client development, and stop pretending that vendor-neutral houses writing thin client web apps might adopt it... Look, we are using Oracle DB, we want Oracle ADF BC, so tell us how to use it efficiently with online help. I have masses of web development experience, and I know how to do that... What I want to do is write a rich ADF JClient/Swing app with JDeveloper - in fact, I have got quite a way into doing it now, with not much thanks to the in-built help!
    And how about releasing the Dockable window manager you guys use for JDeveloper?? I am currently using JIDE as a dockable window manager, and I would like some alternatives. Obviously I am aware that a lot of the development world is using Eclipse/ SWT for this, and I don't like it per se, but I can live with its annoyances.
    And why have you shipped JDeveloper with JGoodies 1.0.4 when 1.0.5 has been out for ages? (Indeed, 1.0.6 is out on the same day as production JDeveloper, but I wouldn't expect you to have that. ;-))
    I hope this post stimulates some thought and interesting comments.
    Cheers......Dean

    Dave,
    These are some of the things I have noticed about the CVS integration with that are a bit "unusual".
    1) Importing a module - it did not successfully check out the module afterwards... Yet when I quit JDeveloper (10.1.3) and then went back in, I could then check out the module.
    2) The update facility does not always seem to "update" from the repository if a file has been recently checked in. However, if I click on the CVS Navigator and expand my repository, and then go back to the Application view, it then seems to have refreshed my view of CVS, and it will then work. I wondered if this was caused by conflicts with my project files, as JDeveloper does have the nasty habit of updating them frequently... However, 2 other developers have also noticed this, and we have now switched back to updating whole modules using Tortoise as it just seems more reliable.
    How often does it poll the CVS server??
    On the plus side- I like the revision comparison tab panel. ;-) It is better than most IDE inbuilt CVS diff tools I have seen - not as good as Beyond Compare though.
    About my CVS configuration :-
    Server is on Windows 2003, using CVSNT (latest version - 2.5.03 I think I installed).
    Client - I changed my CVS executable to be the one shipped with Tortoise CVS, as I have it installed, and I wanted the ability to view my repository from the CVS navigator (I don't think the one shipped with Oracle 10g lets you do this). Client version is 2.5.02 I think... I would have to check that as I writing this from home.
    Oh, and I have a CVSROOT environment variable set, which lets JDev set up my CVS connection automatically.
    I hope this info helps.
    Cheers.....Dean

  • Initial thoughts on Aperture 2

    1. Interface buttons nicely grouped and overall cleaned up
    2. Tabs for projects, metadata, adjustments nice, but I cannot see a preference for "my set" of default adjustment tools: will have to look at plist
    3. Preferences pane much better
    Now the meat and two veg ....
    4. Adjustment sliders are very smooth and result in an easier to achieve balance for an image. That said, still have to use <Option> or key in zero to get a slider back to zero.
    5. Straighten, with or without crop is utter joy. Granted it should never have been the dog it was under 1.x, but kudos for fixing. Even works pretty fluidly at 100% zoom, though somewhat more hesitant than full screen. Tried 2 up with one image cropped and zoomed and straighten still worked well.
    6. RAW v2 looks slightly more saturated with slightly less noise by default vs. 1.1, and the additional sliders are useful. Overall (Canon 20D) 1.1 was a good RAW algorithm, so the differences are more subtle from v2 but welcome. Unsure I'll reprocess many, but the great news is that RAW reprocessing is WAAAAAAAAAAAAAY faster.
    7. Recovery and Blackpoint work really well and make H'light/ Shadows somewhat less needful .... so why are they there by default? Again, I really wanted "My Adjustment Tools" preset.
    8. No curves. C'mon Aperture team.
    9. Lift/stamp of auto-exposure copies the actual values, not the instruction. Still useless therefore as a batch concept.
    10. Retouch/patch/clone work pretty fluidly. Some hesitancy on retouch, but not a problem. Good news is that it appears better than CS3 at determining edges and texture, meaning a simple wipe across a complex background doesn't pick up invalid pixels and retains underlying detail. Pretty darn impressive. Also, didn't notice any slow-down after applying 20 retouches and then trying to straighten (a killer in v1.x). Fabulous!
    11. Haven't tried export yet to see whether it's faster or whether it actually outputs the full image (the missing quadrants bug in jpeg's esp). But you can work alongside exports now and responsiveness is v.good.
    12. All above working alongside Thumbnail generation (~4,500) and Previews (~4,500). The good news is that they took a back seat to editing and adjustments and no longer compromised performance.
    13. Importing and editing images in parallel is amazingly good. Hardly noticed any lag due to disk I/O etc.
    So, I couldn't end on 13, could I?
    14. Overall it is a very very good first impression. No bug or crash or slowdown, SBOD etc. after 1 hour's use. Impressive.


  • A2107A initial thoughts and questions

    I bought one of these on Christmas Eve and I'm generally impressed. Debated between the Lenovo and another Kindle Fire, but chose Lenovo for the sdcard option and the mostly vanilla Android OS.
    Some issues, though: set to USA, spell check is not working correctly. RAM management is totally off; it clogs up and basically must be rebooted to play video or anything else. Video is messed up on Hulu Plus, with voice out of sync with picture. This thing badly needs a firmware/Jelly Bean update.
    Questions: can I use a male micro USB cord to female USB cord to connect a USB stick to this to transfer files, play video? Is that button thing under the camera lens cover--southwest from the camera lens--a reset button? If so, why isn't it mentioned in documentation anywhere?
    Overall I like this tablet for the price but feel Lenovo needs to fix some bugs asap, expected a bit more for a Lenovo product.

    I purchased this for $149.99 CAD at Best Buy here in Canada a couple of weeks ago.
    I could not find any Samsung Galaxy Tab 2 tablets; they had been selling for the same price, but they had sold out. Probably a bait and switch scam.
    The Lenovo is a nice tablet; it works well, but it is shy of memory.
    I did the update/upgrade download of Android and then every time I booted it was upgrading and optimizing all the apps I had installed. Apparently, this is an artifact of Android 4.0.3.
    I have been desperately trying to get a development environment set up to create an app, so I needed to get an ADB USB driver installed, on my Windows 7 64 bit system,  I used the following driver,
    <http://support.lenovo.com/en_US/downloads/detail.page?DocID=DS022366>
    and modified the android_winusb.inf file by adding two entries for the A2107A-F,
    [Google.NTx86]
    ;Lenovo A2107A-F
    %SingleAdbInterface% = USB_Install, USB\VID_17EF&PID_7435&MI_01
    [Google.NTamd64]
    ;Lenovo A2107A-F
    %SingleAdbInterface% = USB_Install, USB\VID_17EF&PID_7435&MI_01
    This works now, and I created an app with Eclipse and ran it on the A2107A-F tablet.
    I was also able to make the LeTools (Synchronization Software) work with the tablet as well.
    <http://support.lenovo.com/en_US/downloads/detail.page?&LegacyDocID=5004>
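    The android_winusb.inf edit described above can also be scripted; here is a rough sketch (the section names and VID/PID line come from the post, the helper itself is mine and untested against the real driver package):

```python
# Append the Lenovo A2107A-F ADB entry under both architecture
# sections of android_winusb.inf, mirroring the manual edit above.
DEVICE_LINES = [
    ";Lenovo A2107A-F",
    "%SingleAdbInterface% = USB_Install, USB\\VID_17EF&PID_7435&MI_01",
]

def patch_inf(inf_text: str) -> str:
    """Return the .inf text with the device entry added to each section."""
    out = []
    for line in inf_text.splitlines():
        out.append(line)
        # Insert the device entry right after each target section header.
        if line.strip() in ("[Google.NTx86]", "[Google.NTamd64]"):
            out.extend(DEVICE_LINES)
    return "\n".join(out)
```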
    Lenovo Product staff have dropped the ball on this product. They did not get all the requisite pieces in place before putting this to market. Maybe the product team was too small, maybe the product team did not have experienced mentors guiding them. Lenovo has to do better; they should have had a specific website set up for this product with all the requisite pieces in place and everything nicely laid out. I spent time at various Chinese Lenovo websites and the file download links pointed to mega-download non-Lenovo websites that had all expired. I was expecting a much better experience from Lenovo than this.
    For others that wish to develop apps for Android, a good starting point to guide you in acquiring and installing all the tools is to check out PhoneGap.
    All the best.

  • Initial thoughts

    I've only spent a few hours in LR4. At this point, I am pleased. At first look I didn't see much difference - beyond the obvious Map, Book and other tweaks. Once I started using the Develop module, I was amazed at the power of LR4. It is a little daunting at first - I have to make smaller moves to make the same changes - but once I got used to that, I was impressed at how quickly I could "fix" challenging shots (the other flash didn't fire, shooting too fast and very poorly underexposed).
    I look forward to the final version. Next, my wish list... in a different post.

    And give it a few weeks. You'll be so in love your friends will have to make an appointment to see you.
    I love my Mac Pro. Absolutely love it!
    Tim...

  • Initial Thoughts After Installation

    I am just trying to gauge what most users who administered previous OS X Server versions think of this new server OS. I'm a bit lost, and it's missing a bunch of features. I prepped my Mac Pro over to an SSD RAID for the new Mountain Lion Server and was kind of hoping I would see the same configurable options, but at this point, coming out of the gate, I'm a bit disappointed.
    Wondering what most are thinking; I probably need to poke around and learn more before judging it.

    I never managed to have Lion Server working properly outside of file sharing. I found the new OS X Server easier, in particular for the certificate.
    have a look to my post:
    TIPS: OS X server installation lessons learnt (for non Guru)

  • HTS723232L9SA60 (Hitachi 7200 RPM 320GB HDD) on T500 initial thoughts

    I wanted to offer praise for the T500 and its upgradeability. I just put a 7200 RPM drive in (stock was a 160GB 5400 RPM). It has been running for the better part of 3 hours and everything is perfect.
    Good notes:
    Windows index rating of the drive went to 5.9 from 5.3-5.4
    It has stayed very quiet
    And stayed cool, all under normal loads
    Misgivings:
    The storage space amount
    All in all, I know most people shouldn't see much heat or noise with an upgrade like that, but my experience in the notebook realm has been that even a subtle change can cause unwanted heat or noise. It is an upgrade worth making
    if no others are presented. I can only imagine what the change to SSD will be like.


  • First Mac... and my initial thoughts.

    I'm 27 and getting a late start on school. Specifically for graphic design. About two years ago I realized I needed to be working on a Mac, and last Friday I finally got one!
    I decided on the MacBook Pro after considering the fact that I'd be taking it to and from school, and because I really just wanted a notebook. I even bought a refurb MBP with the Core Duo, but returned it because I decided on the Core 2 Duo.
    Anyway... so far I couldn't be happier. Moving all the stuff over from my PC was a breeze. There wasn't that much stuff so it didn't take long. I am very surprised at how easy it is to learn the OS and get a feel for it. I bought an Airport Express to use and was thrilled with how easy it was to set up the wireless network. The "just works" phrase is right on so far.
    My wife, who was a little upset about the price tag, forgot all about the money aspect of it when I showed her the book I made in iPhoto of our daughter's 3rd birthday. When she realized she wouldn't have to scrapbook any more, she decided the Mac was a keeper.
    The only test (for me) that remains is installing Adobe CS2. I called Adobe and was surprised when they offered to trade my PC version for the Mac version for only $159. Plus, it's an upgrade to the full retail version 2.3 instead of my educational version. But I've heard and read about problems installing it on MBPs, so I hope I do not run into any of that. I'm also hoping that running it through Rosetta is not that bothersome.
    But anyway, I just wanted to post a positive experience here for others to read. So far I'm glad to have joined the club!

    My MacBook Pro has the entire Creative Suite installed on it. There were really no problems. However, please keep in mind that Photoshop CS2 and other Adobe applications will be slower on the MacBook Pro, and any Mac that has an Intel chip, than on the ones that have a PowerPC processor. This is because Adobe has not yet made a universal Creative Suite for the Mac. This will happen in time, but until then some things may not work correctly.

  • X201 USB becomes unresponsive during Win7 Install

    This is crazy! 
    Here's the lowdown....
    I recently purchased an X201 Laptop (Type 3680-FZ3 / Product ID 3680FZ3) and from the factory it had a 32 bit version of Win7 Pro. So, with my 8GB of ram installed, it would only address 2.93GB.
    Sucky, but easily resolved: I want to install Win7 x64, something I have done HUNDREDS of times in my career as a technician.
    The basics:
    Removed OEM Intel 80gb SSD with win7 x86 installed
    Brand new, blank intel SSD installed (240GB)
    bios updated to the latest version available on Lenovo's site, v1.40 (6QET70WW), all values set to DEFAULT
    Used Windows 7 USB tool to image a MICROSOFT PROVIDED ISO of windows 7 pro x64, tool created the USB successfully (16gb Supertalent Express Duo, formatted NTFS)
    Plugged into "always on" USB port (yellow)
    The issue:
    Turned on, Booted to USB, windows says "Loading Windows Files" and makes it through that
    Then when the Windows 7 splash screen appears (Starting Windows) the system HANGS completely (caps Lock unresponsive, status light on USB shows idle status)
    It hangs right before the colorful Windows logo above "Starting Windows" can appear. Permanently, mind you. Waiting 2+ hours did not help.
    Troubleshooting:
    Tried installing to another SSD, Samsung EVO 128gb same result
    Tried installing my ISO to another USB Stick, SanDisk Cruzer 4gb, same result
    Tried burning image to a DVD, then booted using an external USB CDROM, same result
    Tried another ISO: Windows 7 Ultimate x64, same result (Checksum of both images is VALID)
    Reset bios to default settings *again*, same result
    Tried every USB port the x201 has to offer, same result.
    Disabled Intel AMT, any security settings, same result
    Read through EVERY lenovo Forum post related to the X201, found NOBODY with this same issue.
    Read every Google search result, what a treat that was, no solution found
    Going the extra mile:
    You have the ability to boot the Windows 7 install environment verbosely using F8 during boot. All services/drivers load up... UNTIL YOU GET TO DISK.SYS. After that, it hangs. Every possible combination of "troubleshooting" leads me here.
    The realization:
    I love this computer... or do I? 
    Day 3 of trying to figure this out... I think i'm going insane.
    I swear I heard the laptop mocking my wife and then it tried persuading my children into taking candy from strangers. Please help...

    Hello,
    The X201 is a four year old system, so I don't think it was new.  Here's how the 3680-FZ3 shipped as originally configured:
    i5-540M(2.53GHz), 4GB RAM, 80GB Solid State Drive, 12.1in 1280x800 LCD, Intel HD Graphics, Intel 802.11agn wireless, WWAN option, Modem, 1Gb Ethernet, UltraNav, Secure Chip, Camera, 6c Li-Ion, Win7 Pro 32
    My initial thought is to check and see if the new Intel 240GB SSD has updated firmware and, if so, install and see if that makes any difference.  I would also suggest using a different brand of USB flash drive, just to rule out any possible incompatibility issues with the drive in question. 
    I seem to recall that some models of USB flash drive enumerate as mass storage devices (Base Class 0x08) but use atypical subclasses, showing up as an HDD, etc., which can cause problems with BIOS/UEFI firmware that expects the bootable USB device to be a removable storage subtype (optical drive, typical USB flash drive, etc.).  I believe the Windows 7 Pro x64 files fit on a 4GB USB flash drive, although not with much room to spare.
    Failing that, you might just want to try burning the ISO to a DVD±R and using that to install.  If problems persist at that point, I'd suspect a corrupt .ISO image.
    Regards,
    Aryeh Goretsky
    I am a volunteer and neither a Lenovo nor a Microsoft employee. • Dexter is a good dog • Dexter je dobrý pes
    S230u (3347-4HU) • X220 (4286-CTO) • W510 (4318-CTO) • W530 (2441-4R3) • X100e (3508-CTO) • X120e (0596-CTO) • T61p (6459-CTO) • T43p (2678-H7U) • T42 (2378-R4U) • T23 (2648-LU7)

  • Some Thoughts On An OWB Performance/Testing Framework

    Hi all,
    I've been giving some thought recently to how we could build a performance tuning and testing framework around Oracle Warehouse Builder. Specifically, I'm looking at ways in which we can use some of the performance tuning techniques described in Cary Millsap/Jeff Holt's book "Optimizing Oracle Performance" to profile and performance tune mappings and process flows, and to use some of the ideas put forward in Kent Graziano's Agile Methods in Data Warehousing paper http://www.rmoug.org/td2005pres/graziano.zip and Steven Feuerstein's utPLSQL project http://utplsql.sourceforge.net/ to provide an agile/test-driven way of developing mappings, process flows and modules. The aim of this is to ensure that the mappings we put together are as efficient as possible, work individually and together as expected, and are quick to develop and test.
    At the moment, most people's experience of performance tuning OWB mappings is firstly to see if it runs set-based rather than row-based, then perhaps to extract the main SQL statement and run an explain plan on it, then check to make sure indexes etc are being used ok. This involves a lot of manual work, doesn't factor in the data available from the wait interface, doesn't store the execution plans anywhere, and doesn't really scale out to encompass entire batches of mapping (process flows).
    For some background reading on Cary Millsap/Jeff Holt's approach to profiling and performance tuning, take a look at http://www.rittman.net/archives/000961.html and http://www.rittman.net/work_stuff/extended_sql_trace_and_tkprof.htm. Basically, this approach traces the SQL that is generated by a batch file (read: mapping) and generates a file that can be later used to replay the SQL commands used, the explain plans that relate to the SQL, details on what wait events occurred during execution, and provides at the end a profile listing that tells you where the majority of your time went during the batch. It's currently the "preferred" way of tuning applications as it focuses all the tuning effort on precisely the issues that are slowing your mappings down, rather than database-wide issues that might not be relevant to your mapping.
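    To make the approach concrete, here is a minimal sketch of switching the extended SQL trace on and off around a mapping run. The tracefile identifier is an invented example name; level 8 captures wait events (level 12 would add bind variables):

```sql
-- Sketch: enable extended SQL trace (event 10046) for the current session,
-- run the mapping's generated code, then switch tracing back off.
ALTER SESSION SET tracefile_identifier = 'owb_map_trace';  -- tags the trace file name (example value)
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';

-- ... execute the mapping's generated package here ...

ALTER SESSION SET EVENTS '10046 trace name context off';
```

    The resulting trace file lands in user_dump_dest and can then be run through TKPROF or a Method R-style profiler to produce the response time profile described above.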
    For some background information on agile methods, take a look at Kent Graziano's paper, this one on test-driven development http://c2.com/cgi/wiki?TestDrivenDevelopment , this one http://martinfowler.com/articles/evodb.html on agile database development, and the sourceforge project for utPLSQL http://utplsql.sourceforge.net/. What this is all about is having a development methodology that builds in quality but is flexible and responsive to changes in customer requirements. The benefit of using utPLSQL (or any unit testing framework) is that you can automatically check your altered mappings to see that they still return logically correct data, meaning that you can make changes to your data model and mappings whilst still being sure that it'll still compile and run.
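    As a rough illustration of what a utPLSQL test for a mapping might look like, here is a sketch following the utPLSQL v1 conventions (ut_setup/ut_teardown plus ut_-prefixed test procedures). The package and table names (ut_customer_dim, customer_stg, customer_dim) are hypothetical placeholders, not anything OWB generates:

```sql
-- Sketch of a utPLSQL v1 unit test package for a dimension-load mapping.
CREATE OR REPLACE PACKAGE ut_customer_dim AS
  PROCEDURE ut_setup;
  PROCEDURE ut_teardown;
  PROCEDURE ut_row_counts_match;
END ut_customer_dim;
/
CREATE OR REPLACE PACKAGE BODY ut_customer_dim AS
  PROCEDURE ut_setup IS
  BEGIN
    NULL;  -- load a known test dataset into the staging table here
  END;

  PROCEDURE ut_teardown IS
  BEGIN
    NULL;  -- remove the test dataset here
  END;

  PROCEDURE ut_row_counts_match IS
  BEGIN
    -- "logically correct data": every staged row should arrive in the dimension
    utAssert.eqquery(
      'staged rows arrive in dimension',
      'SELECT COUNT(*) FROM customer_stg',
      'SELECT COUNT(*) FROM customer_dim'
    );
  END;
END ut_customer_dim;
/
-- then run the test with: EXEC utplsql.test('customer_dim');
```

    A test like this would be re-run after every change to the mapping or the data model, which is exactly the safety net described above.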
    Observations On The Current State of OWB Performance Tuning & Testing
    At present, when you build OWB mappings, there is no way (within the OWB GUI) to determine how "efficient" the mapping is. Often, when building the mapping against development data, the mapping executes quickly and yet when run against the full dataset, problems then occur. The mapping is built "in isolation" from its effect on the database and there is no handy tool for determining how efficient the SQL is.
    OWB doesn't come with any methodology or testing framework, and so apart from checking that the mapping has run, and that the number of rows inserted/updated/deleted looks correct, there is nothing really to tell you whether there are any "logical" errors. Also, there is no OWB methodology for integration testing, unit testing, or any other sort of testing, and we need to put one in place. Note - OWB does come with auditing, error reporting and so on, but there's no framework for guiding the user through a regime of unit testing, integration testing, system testing and so on, which I would imagine more complete developer GUIs come with. Certainly there's no built-in ability to use testing frameworks such as utPLSQL, or a part of the application that lets you record whether a mapping has been tested, and changes the test status of mappings when you make changes to ones that they are dependent on.
    OWB is effectively a code generator, and this code runs against the Oracle database just like any other SQL or PL/SQL code. There is a whole world of information and techniques out there for tuning SQL and PL/SQL, and one particular methodology that we quite like is the Cary Millsap/Jeff Holt "Extended SQL Trace" approach that uses Oracle diagnostic events to find out exactly what went on during the running of a batch of SQL commands. We've been pretty successful using this approach to tune customer applications and batch jobs, and we'd like to use this, together with the "Method R" performance profiling methodology detailed in the book "Optimising Oracle Performance", as a way of tuning our generated mapping code.
    Whilst we want to build performance and quality into our code, we also don't want to overburden developers with an unwieldy development approach, because we know what will happen: after a short amount of time, it won't get used. Given that we want this framework to be used for all mappings, it's got to be easy to use, cause minimal overhead, and produce results that are easy to interpret. If at all possible, we'd like to use some of the ideas from agile methodologies such as eXtreme Programming, SCRUM and so on to build in quality but minimise paperwork.
    We also recognise that there are quite a few settings that can be changed at a session and instance level, that can have an effect on the performance of a mapping. Some of these include initialisation parameters that can change the amount of memory assigned to the instance and the amount of memory subsequently assigned to caches, sort areas and the like, preferences that can be set so that indexes are preferred over table scans, and other such "tweaks" to the Oracle instance we're working with. For reference, the version of Oracle we're going to use to both run our code and store our data is Oracle 10g 10.1.0.3 Enterprise Edition, running on Sun Solaris 64-bit.
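    A few session-level knobs of the kind just described, purely as illustration (the values are arbitrary examples, not recommendations, and would need to be validated on the 10.1.0.3 instance in question):

```sql
-- Sketch: session settings that can influence a mapping's performance.
ALTER SESSION SET workarea_size_policy = MANUAL;   -- take manual control of work areas
ALTER SESSION SET sort_area_size = 104857600;      -- e.g. 100MB sort area for large sorts
ALTER SESSION ENABLE PARALLEL DML;                 -- allow set-based loads to run in parallel

-- Instance-level equivalents need ALTER SYSTEM privilege, e.g.:
-- ALTER SYSTEM SET pga_aggregate_target = 1G;
```

    Part of the framework would be measuring which of these actually move the needle for a given ETL job, rather than applying them blindly.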
    Some initial thoughts on how this could be accomplished
    - Put in place some method for automatically / easily generating explain plans for OWB mappings (issue - this is only relevant for mappings that are set based, and what about pre- and post- mapping triggers)
    - Put in place a method for starting and stopping an event 10046 extended SQL trace for a mapping
    - Put in place a way of detecting whether the explain plan / cost / timing for a mapping changes significantly
    - Put in place a way of tracing a collection of mappings, i.e. a process flow
    - The way of enabling tracing should either be built in by default, or easily added by the OWB developer. Ideally it should be simple to switch it on or off (perhaps levels of event 10046 tracing?)
    - Perhaps store trace results in a repository? reporting? exception reporting?
    - At an instance level, come up with some stock recommendations for instance settings
    - identify the set of instance and session settings that are relevant for ETL jobs, and determine what effect changing them has on the ETL job
    - put in place a regime that records key instance indicators (STATSPACK / ASH) and allows reports to be run / exceptions to be reported
    - Incorporate any existing "performance best practices" for OWB development
    - define a lightweight regime for unit testing (as per agile methodologies) and a way of automating it (utPLSQL?) and of recording the results so we can check the status of dependent mappings easily
    - Other ideas around testing?
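    For the first item on that list, generating an explain plan for a set-based mapping is straightforward once the main statement has been extracted from the generated package. A sketch, using an invented INSERT..SELECT standing in for the generated code (DBMS_XPLAN is available from 9iR2 onwards, so it fits the 10.1 target):

```sql
-- Sketch: capture and display the plan for a mapping's main statement.
-- The INSERT..SELECT below is a made-up stand-in for OWB-generated SQL.
EXPLAIN PLAN SET STATEMENT_ID = 'MAP_CUSTOMER_DIM' FOR
  INSERT INTO customer_dim (cust_key, cust_name)
  SELECT cust_seq.NEXTVAL, cust_name
  FROM   customer_stg;

-- Format the captured plan from PLAN_TABLE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY('PLAN_TABLE', 'MAP_CUSTOMER_DIM'));
```

    Storing these plans in repository tables keyed by mapping name would then make the "has the plan changed significantly?" check possible.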
    Suggested Approach
    - For mapping tracing and generation of explain plans, a pre- and post-mapping trigger that turns extended SQL trace on and off, places the trace file in a predetermined spot, formats the trace file and dumps the output to repository tables.
    - For process flows, something that does the same at the start and end of the process. Issue - how might this conflict with mapping level tracing controls?
    - Within the mapping/process flow tracing repository, store the values of historic executions, have an exception report that tells you when a mapping execution time varies by a certain amount
    - get the standard set of preferred initialisation parameters for a DW, use these as the start point for the stock recommendations. Identify which ones have an effect on an ETL job.
    - identify the standard steps Oracle recommends for getting the best performance out of OWB (workstation RAM etc) - see OWB Performance Tips http://www.rittman.net/archives/001031.html and Optimizing Oracle Warehouse Builder Performance http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - Investigate what additional tuning options and advisers are available with 10g
    - Investigate the effect of system statistics & come up with recommendations.
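    The pre-/post-mapping tracing idea in the first bullet could be sketched as a pair of stored procedures, registered as pre- and post-mapping process operators. The procedure names and the trace_level parameter are my own invention, not anything OWB supplies:

```sql
-- Sketch: procedures a pre-/post-mapping process could call to bracket
-- a mapping run with extended SQL trace (level 8 = waits, 12 = waits + binds).
CREATE OR REPLACE PROCEDURE map_trace_on (
  map_name    IN VARCHAR2,
  trace_level IN PLS_INTEGER DEFAULT 8
) IS
BEGIN
  -- tag the trace file with the mapping name so it is easy to find
  EXECUTE IMMEDIATE
    'ALTER SESSION SET tracefile_identifier = ''' || map_name || '''';
  EXECUTE IMMEDIATE
    'ALTER SESSION SET EVENTS ''10046 trace name context forever, level '
    || trace_level || '''';
END map_trace_on;
/

CREATE OR REPLACE PROCEDURE map_trace_off IS
BEGIN
  EXECUTE IMMEDIATE
    'ALTER SESSION SET EVENTS ''10046 trace name context off''';
END map_trace_off;
/
```

    The post-mapping step could additionally locate the trace file, run it through a formatter, and load the results into the proposed repository tables.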
    Further reading / resources:
    - "Diagnosing Performance Problems Using Extended Trace", Cary Millsap
    http://otn.oracle.com/oramag/oracle/04-jan/o14tech_perf.html
    - "Performance Tuning With STATSPACK" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-sep/index.html?o50tun.html
    - "Performance Tuning with Statspack, Part II" Connie Dialeris and Graham Wood
    http://otn.oracle.com/deploy/performance/pdf/statspack_tuning_otn_new.pdf
    - "Analyzing a Statspack Report: A Guide to the Detail Pages" Connie Dialeris and Graham Wood
    http://www.oracle.com/oramag/oracle/00-nov/index.html?o60tun_ol.html
    - "Why Isn't Oracle Using My Index?!" Jonathan Lewis
    http://www.dbazine.com/jlewis12.shtml
    - "Performance Tuning Enhancements in Oracle Database 10g" Oracle-Base.com
    http://www.oracle-base.com/articles/10g/PerformanceTuningEnhancements10g.php
    - Introduction to Method R and Hotsos Profiler (Cary Millsap, free reg. required)
    http://www.hotsos.com/downloads/registered/00000029.pdf
    - Exploring the Oracle Database 10g Wait Interface (Robin Schumacher)
    http://otn.oracle.com/pub/articles/schumacher_10gwait.html
    - Article referencing an OWB forum posting
    http://www.rittman.net/archives/001031.html
    - How do I inspect error logs in Warehouse Builder? - OWB Exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case10.pdf
    - What is the fastest way to load data from files? - OWB exchange tip
    http://www.oracle.com/technology/products/warehouse/pdf/Cases/case1.pdf
    - Optimizing Oracle Warehouse Builder Performance - Oracle White Paper
    http://www.oracle.com/technology/products/warehouse/pdf/OWBPerformanceWP.pdf
    - OWB Advanced ETL topics - including sections on operating modes, partition exchange loading
    http://www.oracle.com/technology/products/warehouse/selfserv_edu/advanced_ETL.html
    - Niall Litchfield's Simple Profiler (a creative commons-licensed trace file profiler, based on Oracle Trace Analyzer, that displays the response time profile through HTMLDB. Perhaps could be used as the basis for the repository/reporting part of the project)
    http://www.niall.litchfield.dial.pipex.com/SimpleProfiler/SimpleProfiler.html
    - Welcome to the utPLSQL Project - a PL/SQL unit testing framework by Steven Feuerstein. Could be useful for automating the process of unit testing mappings.
    http://utplsql.sourceforge.net/
    Relevant postings from the OTN OWB Forum
    - Bulk Insert - Configuration Settings in OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=291269&tstart=30&trange=15
    - Default Performance Parameters
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=213265&message=588419&q=706572666f726d616e6365#588419
    - Performance Improvements
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270350&message=820365&q=706572666f726d616e6365#820365
    - Map Operator performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=238184&message=681817&q=706572666f726d616e6365#681817
    - Performance of mapping with FILTER
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=273221&message=830732&q=706572666f726d616e6365#830732
    - Poor mapping performance
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=275059&message=838812&q=706572666f726d616e6365#838812
    - Optimizing Mapping Performance With OWB
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=269552&message=815295&q=706572666f726d616e6365#815295
    - Performance of the OWB-Repository
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=66271&message=66271&q=706572666f726d616e6365#66271
    - One large JOIN or many small ones?
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=202784&message=553503&q=706572666f726d616e6365#553503
    - NATIVE PL SQL with OWB9i
    http://forums.oracle.com/forums/thread.jsp?forum=57&thread=270273&message=818390&q=706572666f726d616e6365#818390
    Next Steps
    Although this is something that I'll be progressing with anyway, I'd appreciate any comments from existing OWB users as to how they currently perform performance tuning and testing. Whilst these are perhaps two distinct subject areas, they can be thought of as the core of an "OWB Best Practices" framework, and I'd be prepared to write the results up as a freely downloadable whitepaper. With this in mind: do you have any existing best practices for tuning or testing, have you tried using SQL trace and TKPROF to profile mappings and process flows, or have you used a unit testing framework such as utPLSQL to automatically test the set of mappings that make up your project?
    Any feedback, add it to this forum posting or send directly through to me at [email protected]. I'll report back on a proposed approach in due course.

    Hi Mark,
    interesting post, but I think you may be focusing on the trees, and losing sight of the forest.
    Coincidentally, I've been giving quite a lot of thought lately to some aspects of your post. They relate to some new stuff I'm doing. Maybe I'll be able to answer in more detail later, but I do have a few preliminary thoughts.
    1. 'How efficient is the generated code' is a perennial topic. There are still some people who believe that a code generator like OWB cannot be in the same league as hand-crafted SQL. I answered that question quite definitely: "We carefully timed execution of full-size runs of both the original code and the OWB versions. Take it from me, the code that OWB generates is every bit as fast as the very best hand-crafted and fully tuned code that an expert programmer can produce."
    The link is http://www.donnapkelly.pwp.blueyonder.co.uk/generated_code.htm
    That said, it still behooves the developer to have a solid understanding of what the generated code will actually do, such as how it will take advantage of indexes, and so on. If not, the developer can create such monstrosities as lookups into an un-indexed field (I've seen that).
    2. The real issue is not how fast any particular generated mapping runs, but whether or not the system as a whole is fit for purpose. Most often, that means: does it fit within its batch update window? My technique is to dump the process flow into Microsoft Project, and then to add the timings for each process. That creates a Critical Path, and then I can visually inspect it for any bottleneck processes. I usually find that there are no more than one or two dogs. I'll concentrate on those, fix them, and re-do the flow timings. I would add this: the dogs I have seen, I have invariably replaced. They were just garbage; they did not need tuning at all - just scrapping.
    Gee, but this whole thing is minimum effort and real fast! I generally figure that it takes maybe a day or two (max) to soup up system performance to the point where it whizzes.
    Fact is, I don't really care whether there are a lot of sub-optimal processes. All I really care about is performance of the system as a whole. This technique seems to work for me. 'Course, it depends on architecting the thing properly in the first place. Otherwise, no amount of tuning is going to help worth a darn.
    Conversely (re. my note about replacing dogs) I do not think I have ever tuned a piece of OWB-generated code. Never found a need to. Not once. Not ever.
    That's not to say I do not recognise the value of playing with deployment configuration parameters. Obviously, I set auditing=none, and operating mode=set based, and sometimes, I play with a couple of different target environments to fool around with partitioning, for example. Nonetheless, if it is not a switch or a knob inside OWB, I do not touch it. This is in line with my dictat that you shall use no other tool than OWB to develop data warehouses. (And that includes all documentation!). (OK, I'll accept MS Project)
    Finally, you raise the concept of a 'testing framework'. This is a major part of what I am working on at the moment. This is a tough one. Clearly, the developer must unit test each mapping in a design-model-deploy-execute cycle, paying attention to both functionality and performance. When the developer is satisfied, that mapping will be marked as 'done' in the project workbook. Mappings will form part of a stream, executed as a process flow. Each process flow will usually terminate in a dimension, a fact, or an aggregate. Each process flow will be tested as an integrated whole. There will be test strategies devised, and test cases constructed. There will finally be system tests, to verify the validity of the system as a production-grade whole. (stuff like recovery/restart, late-arriving data, and so on)
    For me, I use EDM (TM). That's the methodology I created (and trademarked) twenty years ago: Evolutionary Development Methodology (TM). This is a spiral methodology based around prototyping cycles within Stage cycles within Release cycles. For OWB, a Stage would consist (say) of a Dimensional update. What I am trying to do now is to graft this onto a traditional waterfall methodology, and I am having the same difficulties I had when I tried to do it back then.
    All suggestions on how to do that grafting gratefully received!
    To sum up, I'm kinda at a loss as to why you want to go deep into OWB-generated code performance stuff. Jeepers, architect the thing right, and the code runs fast enough for anyone. I've worked on ultra-large OWB systems, including validating the largest data warehouse in the UK. I've never found any value in 'tuning' the code. What I'd like you to comment on is this: what will it buy you?
    Cheers,
    Donna
    http://www.donnapkelly.pwp.blueyonder.co.uk

  • Need advice on presenting an Initiative to Sun

    I am exploring channels to present ideas to Sun for a new initiative. Does anybody know how to go about this process? I came across the Java Community Process page and initially thought of submitting a Java Specification Request(JSR), but only paying members can submit JSR's. Besides technically this is not a JSR so that would not be the appropriate channel anyway.
    I am also reluctant to approach any third party because the idea I have is really very simple, and I wouldn't want anyone to steal it from me or sideline me the same way that Microsoft is known to have done with small companies/individuals. Does anybody have any information about Sun's reputation in this regard?
    If this idea gets implemented, it could very well be Sun's answer to Microsoft's refusal to include Java in the XP operating system. Even though it might sound very ambitious/premature to say this at this stage, the impact that it can create in the computing world could be as big as that created by Java itself.
    Only serious replies please. Replies may also be sent to [email protected]

    Here are some options.
    The least intrusive way to get your idea seen by a Sun developer is to include it in a request for enhancement (RFE) that has not yet been evaluated. You could add a detailed comment to an existing, tangentially related RFE, or you could create a new one. However, this option isn't confidential.
    Another strategy would be to contact the spec lead for a tangentially related JSR.
    Another strategy would be to sideline a Sun or IBM rep at a JavaOne conference.
    But probably the most respectable route would be to create a mockup implementation of your suggestion and integrate it with the most closely related open-source project in which Sun or IBM developers are involved (such as Apache, Tomcat, GNOME, Blackdown, Mono, etc.).
    Best of luck.
