W530 SSD and Memory help needed for high performance systems.

Hi Guys,
After dealing with lots of laptops that got destroyed by daily use, our organization is considering ordering 25 ThinkPad W530 laptops from Lenovo, and we hope that they last a little longer.
Our main usage is video editing and design, so we have chosen the best that Lenovo has to offer as far as hardware goes.
Since Lenovo charges an arm and a leg for SSDs and 16GB memory, we have decided to purchase these separately and upgrade ourselves.
Given the need for high performance, what would you recommend for a 128GB SSD and 16GB of memory?
The laptops are all coming with 7200 RPM drives for storage so the SSD is mainly for programs and the OS.
For the drive bay, does this part sound right to you? http://shop.lenovo.com/us/itemdetails/0A65623/460/89555ADB1CE946DA80E0E5D6FE77B164
The bay adapter would hold the HDD, and the SSD would be moved to the main drive location.
Is there anything else we should know about these laptops?
Thank you in advance for your time; all suggestions are welcome.
Cheers,
Chris

I installed 32GB (4x8GB) of Corsair Vengeance RAM, and it has been working wonderfully since day 1.
Thinkpad W530, i7-3720QM, 1920x1080 screen, 32GB RAM, dual SSDs (Samsung 830, Crucial M4 mSATA), Quadro K2000M, 9-cell battery, DVD burner, backlit keyboard, Bluetooth, Intel 6300 wireless card

Similar Messages

  • Disk configuration and workflow help needed for lab video workstation

    Hi All,
    Setting up a video editing workstation for a research lab that will use Premiere to edit AVCHD Progressive clips (sometimes with 2 streams side-by-side, but usually single-camera) and export them to .mp4 for later viewing by video coders. We won't be using After Effects or adding anything to the videos other than some text (titles, maybe subtitles).
    The other purpose of this workstation is to act as a file server and backup system for other machines in the lab. Coders will be viewing the exported videos via other networked machines and working with Microsoft Office files that will be stored on the workstation's other HDDs. I'll have a physical backup drive and cloud backup via CrashPlan.
    I've built a machine that is probably overkill, but the client (my wife) wanted it to be "fast," and the purpose of the machine might change in the future:
    i7-4770K (overclocked a bit)
    16GB RAM
    Asus Z87-Pro
    GeForce GTX 660
    I have the OS (W7) and programs on a 256 GB Samsung 840 Pro SSD and currently have two 1TB Velociraptors to use for the Premiere workflow. I'm trying to figure out how to proceed with the purchase of the rest of the drives, and I want to keep the Premiere drives separate from the large storage drives from the lab that are networked and synced to cloud backup.
    Following the recommendations for a three-disc configuration I've picked up on these forums, I could set it up like this:
    C: (256GB SSD) (OS, programs, pagefile)
    D: (1TB HDD) (media, projects)
    E: (1TB HDD) (previews, media cache, exports)
    F: (4TB HDD) (backups of media, projects, and exports and storage of other research files)*THIS DRIVE WOULD BE SHARED ON THE NETWORK
    G: (4TB external HDD) (backup of F & drive that backs up to CrashPlan)
    but it seems that would be a waste of the speed of the second 10k velociraptor. If I added another SSD and RAIDed the Velociraptors it would be:
    C: (256GB SSD) (OS, programs)
    D: (Two 1TB Velociraptors in RAID 0) (media, projects)
    E: (256GB SSD) (media cache, pagefile)
    but would I then need to add another dedicated HDD for previews and exports, or could I store those on the networked F: from above (which would be previews, exports, backups of media and projects, and storage of other research files) without taking a speed hit?
    It seems overkill to have a dedicated drive for exports and previews (let's make that the new F:), then have them copy to the first 4TB drive (now G:), then back that up to the second 4TB drive (now H:), then back that up to CrashPlan. However, people might be accessing that network drive at any time, and I don't want that to slow any part of the video process down.
    I appreciate any advice y'all can give me!

    Hi Jim,
    Thanks for the encouraging response. I'm leaning toward the non-SSD option at this point. 
    To make sure I understand, are you suggesting I try using the Velociraptor Raid 0 in the 2 disk configuration suggested by Harm's Guidelines for Disk Usage chart? Like this:
    C: (256 GB SSD) (OS, Programs, Pagefile, Media Cache)
    D: (1TB x2 in RAID 0) (Media, Projects, Previews, Exports)?
    Where I'm still confused there, and in looking at Harm's array suggestions for 5 or more drives, is how performance is affected by having simultaneous read/write operations happening on the same drive, which is what I understood was the reason for spreading out the files on multiple drives. Maybe I don't understand how Premiere's file operations work in practice, or maybe I don't understand RAID 0 well enough.
    In the type of editing we'll be doing (minimal) aren't there still times when Premiere will be trying to read and write from the D: drive at the same time, for example during export? Wouldn't the increased speed benefits of RAID 0 for either read or write alone be defeated by asking the array to do both simultaneously?
    Maybe the reason the Media Cache is on the SSD in the above configuration is because that is what will be read while writing to something like Exports? But that wouldn't make sense given Harm's chart, which has the Media Cache also located on the array....
    Another question is, given that the final home of the exported videos will be on the big internal drive (4TB) anyway, could I set it up like this:
    C: (SSD) (OS, Programs, Pagefile, Media Cache)
    D: (2TB RAID 0) (Media, Projects, Previews)
    E: (network shared 4TB HDD) (Exports + a bunch of other shared non-video files)
    so I don't end up having to copy the exported videos over to the 4TB drive? Do you think it would render significantly faster to the RAID than it would to the 7200 rpm 4TB drive? I'd like to cut out the step of copying exported videos from D: to E: all the time if it wasn't necessary.
    Thanks again.

  • Help needed for a high-res Windows-compatible movie

    Hi guys,
    Could really do with some help. I'm fairly new to video editing and I have been asked to supply a high-res movie that is compatible with Windows. Are there any settings in Compressor that allow you to do this? My movie is currently in FCP 6; it's 1440 x 1080.
    If there is a setting that allows a high res format in both mac and windows, that would be even better!
    Any help would be appreciated!

    H.264 with the data rate set between 1500 and 2000 should give you great results.
    As long as the PC people have QT, this should never be an issue.
    If people don't need full res you may want to go down to 1280x720.
    If your show is really long, file size could become an issue, but a 10 minute show with AAC 128 audio should come in under 100MB at 1280x720.

  • Color management help needed for Adobe CS5 and Epson printer 1400 - prints coming out too dark with reddish cast and loss of detail

    System: Windows 7
    Adobe CS5
    Printer: Epson Stylus Photo 1400
    Paper: Inkjet matte presentation paper with slight luster
    Installed latest patch for Adobe CS5
    Epson driver up to date
    After reading solutions online and trying them in my settings for 2 days, I am still unable to print what I am seeing on my screen in Adobe CS5. I calibrated my monitor, but am not sure whether, once the calibration is saved, I somehow need to use this setting in Photoshop's color management.
    The files I am printing are photographs of dogs with lots of detail that I digitally painted with my Wacom tablet in Photoshop CS5 and then printed with the Epson Stylus 1400 on 20 lb inkjet paper with slight luster.
    My printed images come out way too dark, with a reddish cast and a loss of detail, when I use these settings in the print window:
    Color Handling: Photoshop manages color, Color management -ICM, OFF no color adjustment.
    When I change to these settings in the printer window: Color Handling: Printer Manages Colors; Color Management: Color Controls, 1.8 Gamma, Epson Standard. It prints lighter, but with a reddish cast and very little detail; this is the best setting I have used so far.
    Based on what I have read online, I think the issue mainly has to do with what controls are set in the Photoshop Color Settings window and the Epson printer preferences. I have screen images of these windows attached and would appreciate knowing what you recommend I enter for each choice.
    Also I am confused as to what ICM color management system to use with this printer and CS5:
    What is the best ICM to use with PS CS5 & the Epson 1400 printer? Should I use the same ICM for both?
    Do I embed the ICM I choose into the new files I create? 
    Do I view all files in the CS5 workspace in this default ICM?
    Do I set my monitor setting to the same ICM?
    If new file opens in CS5 workspace and it has a different embedded profile than my workspace, do I convert it?
    Do I set my printer, Monitor and PS CS5 color settings to the same ICM?
    Is using the same ICM for all devices what is called a consistent workflow?
    I appreciate any and all advice that can be sent my way on this complicated issue. Thank you in advance for your time and kind help.

    It may be possible to figure this out by watching a Dr. Brown video on the subject of color printing on Adobe TV.
    I hope this may help.

  • Help needed for writing query

    I have the following tables (with data), as listed below.
    FK* = foreign key (SUBJECTS)
    FK** = foreign key (COMBINATION)
    1) SUBJECTS (table name)
    SUB_ID (NUMBER)  SUB_CODE (VARCHAR2)  SUB_NAME (VARCHAR2)
    2                02                   Computer Science
    3                03                   Physics
    4                04                   Chemistry
    5                05                   Mathematics
    7                07                   Commerce
    8                08                   Computer Applications
    9                09                   Biology
    2) COMBINATION
    COMB_ID (NUMBER)  COMB_NAME (VARCHAR2)  SUB_ID1 (NUMBER, FK*)  SUB_ID2 (NUMBER, FK*)  SUB_ID3 (NUMBER, FK*)  SUBJ_ID4 (NUMBER, FK*)
    383               S1                    9                      4                      2                      3
    384               S2                    4                      2                      5                      3
    I actually also designed the above table like this:
    3) a) COMBINATION
    COMB_ID (NUMBER)  COMB_NAME (VARCHAR2)
    383               S1
    384               S2
    b) COMBINATION_DET
    COMBDET_ID (NUMBER)  COMB_ID (FK**)  SUB_ID (FK*)
    1                    383             9
    2                    383             4
    3                    383             2
    4                    383             3
    5                    384             4
    6                    384             2
    7                    384             5
    8                    384             3
    Business rule: a combination consists of a maximum of 4 subjects (and must contain them).
    The COMB_NAME (name of the combination) is less relevant to the user, but the user needs
    the subjects contained in each combination.
    I need the following output:
    COMB_ID  COMB_NAME  SUBJECT1   SUBJECT2          SUBJECT3          SUBJECT4
    383      S1         Biology    Chemistry         Computer Science  Physics
    384      S2         Chemistry  Computer Science  Mathematics       Physics
    or even this is enough (what I actually need):
    COMB_ID  SUBJECTS
    383      Biology,Chemistry,Computer Science,Physics
    384      Chemistry,Computer Science,Mathematics,Physics
    You can use either of the COMBINATION designs (either (2) or (3)).
    I also want to know:
    1) Which design is better in this case?
    (I think SUB_ID1, SUB_ID2, SUB_ID3, SUB_ID4 is not a good way to link to the same table, but if exactly 4 subjects always come, a detail table is not necessary.)
    Right now I am achieving the result by coding it in C# after getting the rows from Oracle.
    I am using Oracle 9i (with ODP.NET).
    I want to know how I can get the result in the stored procedure itself.
    2) How could it be designed in any other way?
    Any help/suggestion is welcome.
    Thanks for your time -- Pradeesh

    Well, I forgot the table alias; here it is now with the alias:
    SELECT C.COMB_ID
         , C.COMB_NAME
         , (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID1) AS SUBJECT_NAME1
         , (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID2) AS SUBJECT_NAME2
         , (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID3) AS SUBJECT_NAME3
         , (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID4) AS SUBJECT_NAME4
    FROM   COMBINATION C;
    As you need exactly 4 subjects, I would say the column solution is just fine.
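    For the comma-separated form the poster said was actually enough, a hedged sketch along the same lines is below; it reuses the scalar subqueries above and assumes design (2) with exactly four subject columns, per the stated business rule.
    -- Hedged sketch (not from the thread): the same scalar subqueries, concatenated
    -- into the single comma-separated column that was asked for. Works on Oracle 9i.
    -- On 11gR2 or later, LISTAGG over the COMBINATION_DET detail table (design 3)
    -- would avoid the fixed four columns.
    -- Note: the table listing above names the fourth column SUBJ_ID4; adjust to the actual name.
    SELECT C.COMB_ID
         , (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID1) || ',' ||
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID2) || ',' ||
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID3) || ',' ||
           (SELECT SUB_NAME FROM SUBJECTS WHERE SUB_ID = C.SUB_ID4) AS SUBJECTS
    FROM   COMBINATION C;
    With the sample data this yields the same ordering as the desired output (e.g. 383 -> Biology,Chemistry,Computer Science,Physics), since the SUB_IDn columns already hold the subjects in that order.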

  • Help needed to force sign-out of all sessions - how?

    hi
     Help needed to force a sign-out of all sessions. How??

    Hi and welcome to the Skype Community,
    To force a signout of all instances your Skype is signed into please change your password: https://support.skype.com/en/faq/FA95/how-do-i-change-my-password

  • File missing (file\BCD), error code 0xc0000034 - help needed for work!

    File missing (file\BCD), error code 0xc0000034. Help needed for work! What can I do?
    I have an HP 2000 notebook PC.

    Hi bobkunkle, welcome to the HP Forums. I understand you cannot boot past the error you are receiving.
    What is the model or product number of your notebook? What version of Windows is installed?
    Guide to finding your product number
    Which Windows operating system am I running?
    TwoPointOh
    I work on behalf of HP

  • I work in a High School and I am looking for a new library system that runs on the mac and is not windows based, can anyone recommend anything?


    That's a very broad question and difficult to answer without knowing more about your requirements.
    Try starting on this page Category review: library management software for the Mac |Part I  and part II  to get a starting point.
    regards

  • How to find the current CPU and Memory (RAM) allocation for OMS and Reposit

    Hi There,
    How do I check the CPU and memory (RAM) allocation for the OMS and the Repository database? I'm following the "Oracle Enterprise Manager Grid Control Installation and Configuration Guide 10g Release 5 (10.2.0.5.0)" documentation and it says to ensure the following:
    Table 3-1 CPU and Memory Allocation for Oracle Management Service
    Deployment Size: Small (100 monitored targets)
    Hosts: 1
    CPUs/Host: 1 (3 GHz)
    Physical Memory (RAM)/Host: 2 GB
    Total Recommended Space: 2 GB
    Table 3-2 CPU and Memory Allocation for Oracle Management Repository
    Deployment Size: Small (100 monitored targets)
    Hosts: 1
    CPUs/Host: 1 (3 GHz)
    Physical Memory (RAM)/Host: 2 GB
    Total Recommended Space: 10 GB
    Thanks,
    J

    Hi J,
    These are the minimum requirements; however, it will work fine.
    Also read the article below on "Oracle Enterprise Manager Grid Control Architecture for Very Large Sites":
    http://www.oracle.com/technology/pub/articles/havewala-gridcontrol.html
    For GRID HA solution implementation please read :
    http://www.oracle.com/technology/deploy/availability/pdf/MAA_WP_10gR2_EnterpriseManagerBestPractices.pdf
    Regards
    Rajesh
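    For actually checking what the repository database's host and instance have, a hedged sketch against the repository database itself is below; v$osstat, v$sga and v$pgastat are standard dynamic performance views in 10g, and the OMS host side would be an OS-level check rather than SQL.
    -- Hedged sketch (not from this thread): what the OS exposes to the repository
    -- database, and what the instance itself has been given.
    SELECT stat_name, value
    FROM   v$osstat
    WHERE  stat_name IN ('NUM_CPUS', 'PHYSICAL_MEMORY_BYTES');
    -- SGA components of the repository instance
    SELECT name, value
    FROM   v$sga;
    -- PGA target of the repository instance
    SELECT name, value
    FROM   v$pgastat
    WHERE  name = 'aggregate PGA target parameter';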

  • I need a refund for storage charges I bought and did not need for iCloud!


    You only have 15 days to cancel and get a refund.
    In the UK click on this link:
    HT4874 iCloud Storage Upgrade refund

  • [Mostly Sorted] Extracting tags - regexp_substr and count help needed!

    My original query got sorted, but additional regexp_substr and count help is required further on down!
    Hi,
    I have a table on a 10.2.0.3 database which contains a clob field (sql_stmt), with contents that look something like:
    SELECT <COB_DATE>, col2, .... coln
    FROM   tab1, tab2, ...., tabn
    WHERE tab1.run_id = <RUNID>
    AND    tab2.other_col = '<OTHER TAG>'
    (That's a highly simplified sql_stmt example, of course - if they were all that small we'd not be needing a clob field!)
    I wanted to extract all the tags from the sql_stmt field for a given row, so I can get my (well not "mine" - I'd never have designed something like this, but hey, it works, sorta, and I'm improving it as and where I can!) pl/sql to replace the tags with the correct values. A tag is anything that's in triangular brackets (eg. <RUNID> from the above example)
    So, I did this:
    SELECT     SUBSTR (sql_stmt,
                       INSTR (sql_stmt, '<', 1, LEVEL),
                       INSTR (substr(sql_stmt, INSTR (sql_stmt, '<', 1, LEVEL)), '>', 1, 1)
                       ) tag
    FROM       export_jobs
    WHERE      exp_id =  p_exp_id
    CONNECT BY LEVEL <= (LENGTH (sql_stmt) - LENGTH (REPLACE (sql_stmt, '<')))
    Which I thought would be fine (having tested it on a text column). However, it runs very poorly against a clob column, for some reason (probably doesn't like the substr, instr, etc. on the clob, at a guess) - the waits show "direct path read".
    When I cast the sql_stmt as a varchar2 like so:
    with my_tab as (select cast(substr(sql_stmt, instr(sql_stmt, '<', 1), instr(sql_stmt, '>', -1) - instr(sql_stmt, '<', 1) + 1) as varchar2(4000)) sql_stmt
                    from export_jobs
                    WHERE      exp_id = p_exp_id)
    SELECT     SUBSTR (sql_stmt,
                       INSTR (sql_stmt, '<', 1, LEVEL),
                       INSTR (substr(sql_stmt, INSTR (sql_stmt, '<', 1, LEVEL)), '>', 1, 1)
                       ) tag
    FROM       my_tab
    CONNECT BY LEVEL <= (LENGTH (sql_stmt) - LENGTH (REPLACE (sql_stmt, '<')))
    it runs blisteringly fast in comparison, except when the substr'd sql_stmt is over 4000 chars, of course! Using dbms_lob instr and substr etc. doesn't help either.
    So, I thought maybe I could find an xml related method, and from this link:get xml node name in loop , I tried:
    select t.column_value.getrootelement() node
      from (select sql_stmt xml from export_jobs where exp_id = 28) xml,
    table (xmlsequence(xml.xml.extract('//*'))) t
    But I get this error: ORA-22806: not an object or REF. (It might not be the way to go after all, as it's not proper xml, being as there are no corresponding close tags, but I was trying to think outside the box. I've not needed to use xml stuff before, so I'm a bit clueless about it, really!)
    I tried casting sql_stmt into an xmltype, but I got: ORA-22907: invalid CAST to a type that is not a nested table or VARRAY
    Is anyone able to suggest a better method of trying to extract my tags from the clob column, please?
    Message was edited by:
    Boneist

    I don't know if it may work for you, but I had a similar activity where I defined sql statements with bind variables (:var_name) and then I simply looked for which variables to bind in that statement through this query.
    with x as (
         select ':var1
         /*a block comment
         :varname_dontcatch
         select hello, --line comment :var_no
              ''a string with double quote '''' and a :variable '',  --:variable
              :var3,
              :var2, '':var1'''':varno'',
         from dual'     as string
         from dual
    ), fil as (
         select string,
              regexp_replace(string,'(/\*[^*]*\*/)'||'|'||'(--.*)'||'|'||'(''([^'']|(''''))*'')',null) as res
          from x
     )
     select string,res,
         regexp_substr(res,'\:[[:alpha:]]([[:alnum:]]|_)*',1,level)
    from fil
    connect by regexp_instr(res,'\:[[:alpha:]]([[:alnum:]]|_)*',1,level) > 0
     /
     Or through these procedures:
         function get_binds(
              inp_string in varchar2
         ) return string_table
         deterministic
         is
              loc_str varchar2(32767);
              loc_idx number;
              out_tab string_table;
         begin
              --dbms_output.put_line('cond = '||inp_string);
              loc_str := regexp_replace(inp_string,'(/\*[^*]*\*/)'||'|'||'(--.*)'||'|'||'(''([^'']|(''''))*'')',null);
              loc_idx := 0;
              out_tab := string_table();
              --dbms_output.put_line('fcond ='||loc_str);
              loop
                   loc_idx := regexp_instr(loc_str,'\:[[:alpha:]]([[:alnum:]]|_)*',loc_idx+1);
                   exit when loc_idx = 0;
                   out_tab.extend;
                   out_tab(out_tab.last) := regexp_substr(loc_str,'[[:alpha:]]([[:alnum:]]|_)*',loc_idx+1);
              end loop;
              return out_tab;
         end;
         function divide_string (
              inp_string in varchar2
              --,inp_length in number
         --return string_table
          ) return dbms_sql.varchar2a
         is
              inp_length number := 256;
              loc_ind_1 pls_integer;
              loc_ind_2 pls_integer;
              loc_string_length pls_integer;
              loc_curr_string varchar2(32767);
              --out_tab string_table;
              out_tab dbms_sql.varchar2a;
         begin
              --out_tab := dbms_sql.varchar2a();
              loc_ind_1 := 1;
              loc_ind_2 := 1;
              loc_string_length := length(inp_string);
              while ( loc_ind_2 < loc_string_length ) loop
                   --out_tab.extend;
                   loc_curr_string := substr(inp_string,loc_ind_2,inp_length);
                   dbms_output.put(loc_curr_string);
                   out_tab(loc_ind_1) := loc_curr_string;
                   loc_ind_1 := loc_ind_1 + 1;
                   loc_ind_2 := loc_ind_2 + length(loc_curr_string);
              end loop;
              dbms_output.put_line('');
              return out_tab;
         end;
         function execute_statement(
              inp_statement in varchar2,
              inp_binds in string_table,
              inp_parameters in parametri
          ) return number
         is
              loc_stat dbms_sql.varchar2a;
              loc_dyn_cur number;
              out_rows number;
         begin
              loc_stat := divide_string(inp_statement);
              loc_dyn_cur := dbms_sql.open_cursor;
              dbms_sql.parse(c => loc_dyn_cur,
                   statement => loc_stat,
                   lb => loc_stat.first,
                   ub => loc_stat.last,
                   lfflg => false,
               language_flag => dbms_sql.native
               );
              for i in inp_binds.first .. inp_binds.last loop
                   DBMS_SQL.BIND_VARIABLE(loc_dyn_cur, inp_binds(i), inp_parameters(inp_binds(i)));
                   dbms_output.put_line(':'||inp_binds(i)||'='||inp_parameters(inp_binds(i)));
              end loop;
              dbms_output.put_line('');
              --out_rows := DBMS_SQL.EXECUTE(loc_dyn_cur);
              DBMS_SQL.CLOSE_CURSOR(loc_dyn_cur);
              return out_rows;
          end;
     Bye Alessandro
    Message was edited by:
    Alessandro Rossi
    There is something missing in the functions, but if there is anything that interests you, you can ask.
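    Back on the original tag-extraction question, a hedged sketch of a pure-regexp alternative is below; the regexp functions accept CLOB input from 10g onward, and the LEVEL bound reuses the LENGTH/REPLACE trick from the thread since REGEXP_COUNT only appears in 11g. Whether it avoids the direct path read waits on the clob would still need testing.
    -- Hedged sketch (not from the thread): pull each <TAG> straight from the CLOB
    -- with one regexp per LEVEL, avoiding the nested SUBSTR/INSTR calls.
    -- p_exp_id is the same PL/SQL parameter used in the original query, and the
    -- query assumes one export_jobs row per exp_id, as the original does.
    SELECT regexp_substr(sql_stmt, '<[^>]+>', 1, LEVEL) AS tag
    FROM   export_jobs
    WHERE  exp_id = p_exp_id
    CONNECT BY LEVEL <= (LENGTH (sql_stmt) - LENGTH (REPLACE (sql_stmt, '<')));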

  • HELP NEEDED FOR MAC OS X PANTHER INSTALLATION

    I'm trying to install Mac OS X Panther (CD) on my PowerBook G3, model M5343,
    but I keep getting an "unexpected error code (exit code 0)". It happens after I agree to the license and before I select the hard drive volume to install to. I'm hoping I can get some help.
    P.S. Mac OS 9.2.2 boots and runs perfectly, because I'm using it to post this.
    Thanks again

    Hello, and welcome to Apple Support Discussions!
    Having noticed you've posted the same question twice, once here, as well as
    here: http://discussions.apple.com/message.jspa?messageID=10875742#
    that gives people two chances to see if they can get it right.
    Sometimes, issues such as remaining hard disk drive free space may come
    into play when trying to install extra systems, updates or more apps on an HDD;
    that may be part of the problem in this instance. A 10GB HDD is not adequate.
    Sure, I had an early white iBook 12" 500MHz with 576MB RAM & 10GB HDD;
    for a time, it ran OK with OS 9.2.2 & Panther 10.3.9. (The OS 9.2.2 ran great.)
    A lack of free space in the drive, and OS X demands for RAM along with its
    automatic need to create Virtual Memory in hard drive free-space, made it run
    a bit slow. A later cleaned up installation of Panther 10.3.9 without OS 9.2.2
    did better; the working system plus some additional applications took 6GB.
    A decent sized hard disk drive of at least 40GB capacity, and sufficient chip
    RAM of as much as that computer's specifications indicate it can handle,
    are both recommendations prior to an upgrade to an OS X version.
    Hard disk drive format for OS X should be HFS+, not just HFS, for Panther.
    The journaling should be turned on for Panther. In some cases, to do OS X
    right, a new installation may be required, due to space and format issues.
    Setting up the hard disk drive using the booted OS X Installer's version of
    Disk Utility is the method to assure the hard drive is set up correctly, but that
    would erase the contents of the hard disk drive.
    If the formatting is correct and there is enough free space on the drive to
    initiate and complete the install, then maybe it could work without a full
    new installation on an erased drive (Disk Utility option: overwrite with zeros/reformat),
    for a while at least. A hard disk drive that old is likely at the end of its service life.
    Presupposing the OS X 10.3.x install disc set is not from any other computer,
    and knowing it should be a retail full install disc and not an upgrade-only set,
    do you have the version number and information off the Panther installer discs
    to share here, in case there is a discrepancy in this matter?
    There are several reasons why it won't go past a certain point in the installation
    process. If there is a problem at the low level of the hard disk drive, maybe
    a sector error or damage, then the drive would need to be repaired; this may
    also include the steps outlined generally above, where the drive would be wiped,
    overwritten with zeros, reformatted, and so on. Also be sure OS 9 drivers
    are installed if you start out with the OS X install disc first, otherwise OS 9 won't boot.
    Some of the older computers do so well with OS 9.2.2, since it really does a lot
    with very little resources; while OS X demands much more, including top quality
    RAM and plenty of hard disk drive free space. The old ATA-2 HDD hardware
    and a ceiling of only 512MB RAM limit what this machine can do.
    You could probably see if the booted OS X installer's Disk Utility (accessed from
    drop-down menu bar in Installer header) can use its Disk First Aid, to Repair Disk.
    Could be something minor. However, now that some extra stuff is on that hard disk
    drive, you should check and see how much free space remains there.
    This is critical and could compromise the entire operation as well as the OS 9 run.
    Good luck & happy computing!

  • MySQL has run out of memory ::Help needed::

    ::Help needed::
    I've created a PHP web application in Dreamweaver, which uses a MySQL database, containing 14 tables.
    On one page, I use an SQL query to select data from 10 of the tables in the database.
    However, when I try to preview the page in a browser, I get a PHP warning stating that the MySQL engine has run out of memory.
    Is there a way of increasing the Memory Cache of the engine, or a way to optimize the performance?

    Is this happening locally?
    If it is, try rebooting your system and see if this fixes the problem. If not then you have a problem with your code. If it works locally but not on the server, then you know it's not something in your code causing the issue, so you can confidently go to your host support and have them sort it out.
    With any such situation, testing locally first is a vital debugging step.
    Hope this gives you a path to follow.
    Lawrence   *Adobe Community Expert*
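    For the memory question itself, a hedged sketch of the MySQL-side settings worth checking is below. The variable names are standard MySQL system variables; the values are only illustrative, and simplifying the 10-table query may help more than any setting.
    -- Hedged sketch: inspect the memory-related server variables that most often
    -- limit large multi-table joins and in-memory temporary tables.
    SHOW VARIABLES LIKE 'tmp_table_size';
    SHOW VARIABLES LIKE 'max_heap_table_size';
    SHOW VARIABLES LIKE '%buffer_size%';
    -- Raise the per-session limits before running the expensive query
    -- (illustrative values, not recommendations).
    SET SESSION tmp_table_size      = 64 * 1024 * 1024;
    SET SESSION max_heap_table_size = 64 * 1024 * 1024;
    SET SESSION join_buffer_size    = 4 * 1024 * 1024;
    SET SESSION sort_buffer_size    = 4 * 1024 * 1024;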

  • Query help needed for querybuilder to use with lcm cli

    Hi,
    I had set up several queries to run with the lcm cli in order to back up personal folders, inboxes, etc. to lcmbiar files to use as backups.  I have seen a few posts that are similar, but I have a specific question/concern.
    I just recently had to reference one of these backups, only to find it was incomplete. Does the query used by the lcm cli also only pull the first 1000 rows? Is there a way to change this limit somewhere?
    Also, since importing this lcmbiar file for something 'generic' like 'all personal folders' pulls in WAY too much stuff, is there a better way to limit this? I am open to suggestions, but it would almost be better if I could create individual lcmbiar output files on a per-user basis.  This way, when/if I need to restore someone's personal folder contents, for example, I could find them by username and import just that lcmbiar file, as opposed to all 3000 of our users.  I am not quite sure how to accomplish this...
    Currently, with my limited windows scripting knowledge, I have set up a bat script to run each morning, that creates a 'runtime' properties file from a template, such that the lcmbiar file gets named uniquely for that day and its content.  Then I call the lcm_cli using the proper command.  The query within the properties file is currently very straightforward - select * from CI_INFOOBJECTS WHERE SI_ANCESTOR = 18.
    To do what I want to do...
    1) I'd first need a current list of usernames in a text file, that could be read (?) in and parsed to single out each user (remember we are talking about 3000) - not sure the best way to get this.
    2) Then instead of just updating the lcmbiar file name with a unique name as I do currently, I would also update the query (which would be different altogether):  SELECT * from CI_INFOOBJECTS where SI_OWNER = '<username>' AND SI_ANCESTOR = 18.
    In theory, that would grab everything owned by that user in their personal folder - right? and write it to its own lcmbiar file to a location I specify.
    I just think chunking something like this is more effective, and BO has no built-in backup capability that already does this.  We are on BO 4.0 SP7 right now, and will move to 4.1 SP4 over the summer.
    Any thoughts on this would be much appreciated.
    thanks,
    Missy

    Just wanted to pass along that SAP Support pointed me to KBA 1969259 which had some good example queries in it (they were helping me with a concern I had over the lcmbiar file output, not with query design).  I was able to tweak one of the sample queries in this KBA to give me more of what I was after...
    SELECT TOP 10000 static, relationships, SI_PARENT_FOLDER_CUID, SI_OWNER, SI_PATH FROM CI_INFOOBJECTS,CI_APPOBJECTS,CI_SYSTEMOBJECTS WHERE (DESCENDENTS ("si_name='Folder Hierarchy'","si_name='<username>'"))
    This exports inboxes, personal folders, categories, and roles, which is more than I was after, but still necessary to back up... so in a way, it is actually better because I have one lcmbiar file per user that contains all their 'personal' objects.
    So between narrowing down my set of users to only those who actually have saved things to their personal folder and now having a query that actually returns what I expect it to return, along with the help below for a job to clean up these excessive amounts of promotion jobs I am now creating... I am all set!
    Hopefully this can help someone else too!
    Thanks,
    missy
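    For the per-user approach described above, here is a hedged sketch of the simpler query from the original post, with TOP added to lift the default 1000-row cap; <username> stays a placeholder to be substituted by the calling batch script.
    -- Hedged sketch based on the query in the original post: one user's personal
    -- folder contents only, with TOP raised past the default 1000-row limit.
    SELECT TOP 10000 *
    FROM   CI_INFOOBJECTS
    WHERE  SI_OWNER = '<username>'
    AND    SI_ANCESTOR = 18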

  • Ideas or help needed for a simple, robust pluggable framework

    Hi all,
    Having written a fairly decent plugin engine, similar in concept to the Eclipse plugin engine, although at a more generic scale, I am looking for any possible ideas for a Java Swing framework that is built around the engine, with the concept of using a framework that is built on mostly plugins. My engine handles, or will soon handle, a number of features to make the engine robust enough, yet still easy enough, to use for just about any purpose.
    The engine is pretty simple, although with a bit more work I feel will be overall a pretty robust and powerful plugin engine. Each plugin is made up of one or more "services". A plugin is a .jar file that contains a plugin-conf.xml config file, the classes that implement the Service interface, and any supporting classes. The "plugin" is really the package of one or more services and supporting classes. The engine will handle the ability to work with expanded dir structures as well, so that the build process doesn't have to create .jar files on every build of a plugin. The engine has built in support to load, unload and reload a plugin at runtime. This helps during development by allowing auto-reload of a plugin service without having to restart the app. The engine has the ability to "watch" URLs in a separate thread (still working on this), and at given intervals if a change occurs to any plugin, that plugin is reloaded. This is configurable on a per plugin basis in the config file.
    Every plugin .jar file gets its own classloader instance. Because of the nature of a framework that may rely heavily on plugins, it will be very common to have plugin dependencies, where a plugin service may rely on one or more other plugin services. The dependencies are configured in the plugin-conf.xml file, and the engine resolves these when the plugin is loaded, automatically. Once all plugins have been loaded, an "init" call is made that then goes and resolves all plugin service dependencies, setting up the behind the scenes work to make sure any service can use any other service it defines to depend on. Another area is plugin versions. There will no doubt be a time when some sort of application may have legacy plugins, but also have newer plugins. For example, an application built on a "core" set of plugins, may eventually update the core plugins with newer versions. The engine allows the "old" plugins to exist and work while new versions of the same plugins may be loaded and working at the same time. This allows older plugins that depend on the old set of core plugins to work, while newer plugins that depend on the new core plugins may work also. Any plugin may depend on one or more services specified by specific versions, or a range of versions.
    Plugin services can define to be created when first loaded, or lazy instantiated. Ideally, an application would opt for lazy instantiation until a plugin is needed. For example, a number of plugins may need to add menu items or buttons that would trigger its service. The plugin does not actually need to be created until the menu or button is clicked on. There is one BIG problem with how this engine works though. Unlike the Eclipse (and other) engines where the config file defines the menu item(s), buttons, etc in an xml sort of language, this engine is built for generic use, and therefore is not specific to menu items or buttons triggering a service instantiation. Therefore, a little "hack" is required. A specific plugin that is created when first loaded will be required to set up all the menu items for specific plugins, then handle the actionPerformed() call to instruct the engine to create the service. The next step would be for the plugin service to add its own handler to the specific menu item it depends on, and remove the "old" handler the startup plugin added to it to handle the initial click. Another thought just struck me though. Because the engine must use an XML parser to load every plugin-conf.xml file, it might be possible to "extend" the parsing routine, where by an extending class could be added to the engine to parse plugin-conf.xml files. First the plugin engines own routine would parse it. Then, the extending class could parse for any extra plugin-conf.xml info, such as menu item settings, and directly set up the menu items and handlers in this manner. I will probably include this ability directly in the engine soon anyway, so that nobody else has to do this, but this is one area I would appreciate some feedback on.
    Anyway, so that is the jist of the engine. There is more to it under the hood, but that sums up a good part of it. Now, the pluggable framework, much like what the "shell" of eclipse, forte and so forth offer, is built around my engine to make it very easy to build Swing applications with a pluggable framework underneath. The idea is to package up a startup main class that is configurable, a number of useful plugins that other plugins could depend on, such as an Outlook layout, menuing, toolbars, drag/drop, history, undo/redo, macro record, open/save/search/find/replace dialogs, and so forth. This isn't just for an IDE though. The developer using the framework could deploy the basic app with the plugins of his/her choice, and add to it with his/her own plugins.
    Soooo, after this long post, what I am getting at is if anyone would be interested in helping out with ideas, feedback, testing, core framework plugins, and so forth. At this time I am keeping the code closed, but will probably public domain it, open source it, or whatever. The finished framework should make it easy for anyone to quickly build useable applications, and if all goes well, I'd like to set up a site with a location for 3rd party plugins to be uploaded, for download, comments, etc. Being a web developer, I myself will probably work on some plugins for Web Services, web stress testing, and so forth. I have lots of ideas for useable plugins.
    On that note, one application I am personally working on for my own use, is a simple yet possibly robust internet suite of apps. I want to incorporate FTP, Email, NewsGroup, and IRC/AOL IM/Yahoo IM/MSN IM/ICQ chat into a single app. Every aspect of it would be plugins. Frankly, I hate outlook, Eudora is alright, but I want to do some things with the email app. I also want a single IM/Chat app that can talk with all protocols (not an easy task, take a look at GAIM). Newsgroups are handy to work with for developers and others of interest, as is FTP. But even more so, being able to have all in one big application framework that allows them to share data between each other, work with one another, and so forth is appealing to me, and being written in Java it could potentially work on many platforms, giving some platforms a possible nice set of internet apps to use. Being able to send an email to a mailing list AND have it posted to specific newsgroups at the same time without having to copy/paste, open up separate applications and so forth has appeal. Directly emailing from any chat or newsgroup link without another app starting up is a little faster as well. Those are just "small" things that could prove to be very kewl in a complete internet app. Adding a web browser, well, I don't think I want to go that route. But if there is already a decent Java built web browser, it shouldn't be too hard to add it as a plugin.
    So, if anyone is interested, by all means, drop a post to this thread, let me know of interest, feedback, ideas, point out bad things, and so forth. I appreciate all forms of communication.
    Thanks.

    Yes I do. I am using it now with my work related project.
    I am in fact reworking the engine a bit now. I want to incorporate the notion of services (like OSGi) where by a plugin can register services. These services are "global" in scope, meaning any plugin may request the use of a service. However, services, unlike plugins, are not guaranteed to be available. Therefore, plugins using services must be coded to properly handle this possibility. As an example, imagine an email application using my engine. One plugin may provide the email gateway, including the javamail .jar library and provide the email service. Other plugins, such as the one that provides the functionality for the SEND button, would "use" this service. At runtime, when the send button was pressed it would ask the engine for the email service. If available, off goes the email. If not, it could pop up a dialog indicating some sort of message that the email service is not available.
    I am at the VERY beginning stages in this direction so I'd love to have ideas, thoughts, suggestions as to how this might be implemented. I do believe though that it will provide for a more powerful engine. The nice thing is, while the engine will support static runtime plugins, it will also support dynamic services that can come and go during the runtime. The key is that plugins using services do not maintain references to them, but instead query the engine each time a plugin needs to use a service.
    Static plugins are those that are guaranteed to be available or, if not, any dependent plugin is not allowed to load. That is, if A depends on B and B is not able to be loaded, A is unloaded as well, as it can't perform its job without B; it depends on B in some manner to complete its function. Imagine a plugin adding an option panel to the Preferences page, only the Preferences plugin is not loaded. It just can't work. However, with some work, there could be variations on this. That is, a plugin may provide a menu item as well as a preferences page. If the Preferences plugin is not available, then the plugin may simply still work via the menu item, but have no preferences panel available. This should be configurable via the plugin-conf.xml config file. However, as I have it now, using extension points and extensions like Eclipse does, it is also possible that if the Preferences plugin isn't loaded, it won't look for ANY extensions extending its extension point, and therefore the plugins could all still run but there would simply be no preferences page. So, I am not entirely sure yet which way is best for this to work.
    My engine, as it stands now, allows for separate classloader plugin loading, it automatically resolves all dependencies by creating the plugin registry each time the engine is started up. To speed up plugin loading, it maintains a plugins.xml file in the root dir that keeps track of each plugin that was loaded and its last timestamp. Plugins can be open directory files or jarred up into .PAR files (think .WAR or .EAR files). The engine can find .par or open-dir plugins in multiple locations (including URL locations for direct .par files). When it finds a .par file, it first decompresses the .par file to a plugin work directory. Every plugin must have a plugin-conf.xml in its root dir, and either a /classes dir where compiled classes are, or a .jar file in the root path of the plugin, where the /classes dir superscedes the .jar file. Alternatively, anything in a /lib dir is automatically picked up as part of the plugin classpath. So a plugin that wraps the xerces.jar file can simply place the xerces.jar in the /lib dir and automatically present the xerces library to all dependent plugins (which can import the xerces classes but not need to distribute the xerces.jar file if a plugin they depend on has it in its /lib dir). The "parent lookup" process goes only one parent level deep. That is, if plugin A depends on a class in a /lib/*.jar file in plugin B, then the engine will resolve the class (through delegation) of plugin B. But if A depends on B, B depends on C where plugin C's /lib/*.jar file contains a class A is looking to use, this will not work and A will throw a ClassNotFoundException. In other words, the parent lookup only goes as far as the classpath of all dependent plugins, not up the chain of all dependent plugins. Eclipse allows each plugin to "export" various classes, or packages, or entire .jar files and the lookup can go all the way up the chain if need be. I haven't yet found a big reason for supporting this, so I am not too concerned with that at this point. The engine does support reloadable plugins although I have not yet implemented it. Because each plugin information object is stored in a Map keyed on the plugins GUID (found in the plugin-conf.xml file), it is easy enough to load a new plugin (since they get their own classloader) and replace the object at the GUID key and now have a reloaded plugin. The harder part is properly notifying all dependent plugins of the reload and what to do with them. Therefore I have not quite yet implemented this feature although the first step can easily be done, so long as nobody minds the "remnants" of older plugins laying around and possibly not being garbage collected.
    All of this works now, and I am using it. I do NOT have a generic UI framework just yet. I am working on that now. Eclipse has a very nice feature in that every plugin.xml file builds up the UI without any plugin code ever being created or ran. I am working on something like that now, although I am focussed more on the aspect of the engine at this point.
    Two things keep me going. First, the sheer fun of working on this and seeing it succeed, even if a little bit. Second, while I love the idea of Eclipse, OSGi and other engines, so far I have yet to find one that is very easy to write plugins for, is very small, and is "generic" enough for any use. Some may argue JBoss core, at 29K, can do this. I don't know if it can. It is built around JMX and I don't know that I agree JMX is the "ultimate" core plugin engine for all types of apps. Not that mine is either, but I'd like to see what I am working on become that if possible. Currently, with an xml parser (www.xmlpull.org) added as part of the code, my engine is about 40K with debug info, maybe about 28K without. I expect it to grow a bit more with services, reloadable/unloadable code, and some other stuff. However, I am thinking it will still be around 50K in size and in my opinion, with an xml read/write parser (a very fast one at that), extension/extension points, services, dependencies, multiple versions of plugins (soon), load/unload/reload capabilities, .par management (unjar into work dir, download .par files from urls, etc.) and open directory capabilities, individual classloaders, automatic dependency resolution, dynamic dependency resolution and possibly even more, I think what my engine offers (and will offer) is pretty cool in my book.
    Nonetheless, there is always room for improvement. One of the things I pride myself on is using as little code as possible and keeping it neat and easily readable, not to mention as non-archaic as possible; that makes for an easily maintainable project.
    So, having said all that, YES, the engine can be used as is right now. It does not reload plugins, but you can dynamically load plugins, handle dependency resolution, have a very fast xml read/write parser at your disposal for any plugin, and for the most part easily write plugins. That is all possible now. I should put the engine I have now up on my generic-plugin-engine sourceforge project one of these days, perhaps soon I will do that! While I have no problem handing out the code, I am currently the only committer and I don't have it loaded into CVS at this point. I would like to do so very soon.
    So, if you are interested, by all means, let me know and I'll be happy to send you what I have, and love to have more help on the next version of this.
