Lib in different drives, is it more efficient?

How much more efficient is it for the system if your library is spread out across different external drives? e.g. strings on one drive, percussion on another, etc.
Thanks,
Guy

Thanks for the reply, but I wanted to go one step further. Since I often write in a symphonic texture, will having each section (strings, woodwinds, brass, percussion, synths, etc.) on a different drive make my system more efficient than putting all the samples on one drive? I ask because it involves some investment, but if it's confirmed that it will make a difference then I don't mind buying a couple more drives.

Similar Messages

  • New PC and want to move library to a different drive location from original PC

    I've just built my self a new PC and would like to move my iTunes music to my new PC.
    I know this is a pretty straightforward thing to do, as I have done it on more than one occasion before without any hassle.
    It's a little different this time though: I need to move the library to a different drive from the default location, as my C: drive is an SSD and can't fit my entire library on it.
    I've got my iTunes music folder backed up on an external drive, I thought it would just be a case of installing iTunes on my new PC copying my music to the drive I want it on and telling iTunes that this will be my new storage location.
    After a bit of playing around I managed to get iTunes to see all my music again.
    I'm having trouble getting it to find my cover artwork though and my old ratings and playlist count.
    Does anyone know how to restore these?
    Also, do I have to have 2 iTunes folders on the computer? 1 in the standard C: drive location and 1 where I now have all my music. Can I not just have it all on the 2nd drive?
    If I copy the 4 library files from my backup into the new folder it gives me my old ratings and playlist count back, but loses all the artwork, and most of the files have broken links.
    Sorry if I've not explained what I am trying to say very clearly, thanks for any help.

    In iTunes under Edit > Preferences > Advanced turn off Keep iTunes Media folder organized, then click OK.
    Use Edit > Preferences > Advanced to change the media folder location to D:\Music\iTunes\iTunes Media, then click OK. If iTunes asks to consolidate or move any files at this point say no or cancel.
    Open the menu File > Library > Organize Library... tick Rearrange files in the folder "iTunes Media", and click OK.
    In iTunes under Edit > Preferences > Advanced turn on Keep iTunes Media folder organized, then click OK.
    Open the menu File > Library > Organize Library... tick Consolidate files, and click OK.
    This should rearrange the media folder into the current structure with the minimum amount of physical file moves.
    At this stage you should find that iTunes is now looking for your mobile apps at D:\Music\iTunes\iTunes Media\Mobile Applications. Check the properties of a single app just to make sure. If all is well you can reclaim the space on C:.
    If it were me at this stage I would now close iTunes, then copy the library files and album artwork folder from wherever they are on C: into D:\Music\iTunes then shift-start-iTunes and choose the library at D:\Music\iTunes. It is at this point that the library is now truly portable.
    Having made that work I would then close iTunes, move the iTunes folder up to D:\iTunes and shift-start-iTunes to open it, but that's my preference.
    tt2

  • Linking from one PDF to another: Is there a more efficient way?

    Some background first:
    We make a large catalog (400 pages) in InDesign and it's updated every year. We are a wholesale distributor and our pricing changes, so we also make a price list with price ref #s that correspond with #s printed in the main catalog. Last year we also made this catalog interactive so that a PDF of it could be browsed using links and bookmarks. This is not too difficult using InDesign and making any adjustments in the exported PDF. Here is the part that becomes tedious, and is especially so this year:
    We also set up links in the main catalog that go to the price list PDF, opening the page with the item's price ref # and prices. Here's my biggest issue: I have not found any way to do this except making links one at a time in Acrobat Pro (and setting various specifications like focus, action, and which page in the price list to open). Last year this wasn't too bad because we used only one price list. It still took some time to go through and set up 400-500 links individually.
    This year we've simplified our linking a little by putting only one link per page, but that is still 400 links. And this year I have 6 different price lists (price tiers...) to link to the main catalog PDF. That's in the neighborhood of 1200-1500 repetitions of: double-click the link (button) to open Button Properties, click the Actions tab, click Add > "Go to page view", set the link to the other PDF's page, click Edit, change Open in to "New Window", and set the Zoom. This isn't a big deal if you only have a few Next, Previous, Home kind of buttons... but it's huge when you have hundreds of links. Surely there's a better way?
    Is there any way in Acrobat or InDesign to more efficiently create and edit hundreds of links from one PDF to another?
    If anything is unclear and my question doesn't make sense please ask. I will do my best to help you answer my questions.
    Thanks

    George, I looked at the article talking about the FDF files and it sounds interesting. I've gathered that I could manipulate the PDF links by making an FDF file and importing that into the PDF, correct?
    Now, I wondered: can I export an FDF from the current PDF, change what is in there, and import it back into the PDF? I've tried this (Forms > More Form Options > Manage Form Data > Export Data) and then opened the FDF in a text editor, but I see nothing related to the document's links... I assume this is because the export only covers 'form' data to begin with, but is there a way to export something with link data like that described in the article link you provided?
    Thanks

  • How to split up a large iTunes Media folder onto different drives?

    Hi everyone. Not too long ago I committed to digital-only media, which I am very happy with. The problem I am running into now is that my iTunes media folder is only 100GB or so shy of the 2TB mark, so it will soon outgrow the largest hard drives available. I have considered getting a RAID card for my Mac Pro to set up a RAID 0 with two 2TB drives, but I don't think I should have to drop $700 just to get my iTunes media organized onto the four 2TB drives I already have.
    Ideally I would like to move my Movies folder (650GB) to one drive, TV Shows (850GB) to another, and everything else to yet another. The problem I foresee with this strategy is that iTunes will no longer "Keep iTunes Media folder organized" or "Copy files to iTunes Media folder when adding to library". It would be a huge pain to have to manually manage the file structure of such a huge library.
    I have also tried leaving an alias ("Movies" for example) that points to my Movies folder on a separate drive, but iTunes doesn't seem to want to follow the alias.
    Anyway, I figured this scenario has to have come up before, and there may be some other solutions out there. Any help would be most appreciated.
    Message was edited by: acosmichippo54
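    A side note on the alias problem above: a Finder alias and a POSIX symbolic link are different things, and applications that resolve paths through the filesystem generally follow symlinks even when they ignore aliases (iTunes is commonly reported to follow symlinks, though that is worth verifying on your own setup). A hypothetical Python sketch of the filesystem behavior, with made-up paths:

```python
import os
import tempfile

# Made-up paths: an external drive holding the real Movies folder, and a
# symlink inside the media folder pointing at it. Unlike a Finder alias,
# a symlink is resolved by the filesystem itself, so any app sees through it.
base = tempfile.mkdtemp()
external = os.path.join(base, "ExternalDrive", "Movies")
os.makedirs(external)
open(os.path.join(external, "movie.m4v"), "w").close()

media = os.path.join(base, "iTunes Media")
os.makedirs(media)
link = os.path.join(media, "Movies")
os.symlink(external, link)  # same effect as `ln -s` in Terminal

# Reading through the link reaches the real file on the other drive:
assert os.path.exists(os.path.join(link, "movie.m4v"))
```

    The `os.symlink` call is what `ln -s` does from Terminal.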

    acosmichippo54 wrote:
    ok, after reviewing those tips, these problems still bug me:
    Macyourself: "Any time you want to add a new movie or TV show to your library, move it over to the appropriate folder on your external hard drive, hold Option, and drag it to iTunes."
    I agree this involves a little more user interaction, but once you get into a routine it is really quite simple to manage. For example, I keep all my Apple Lossless tunes on an external USB drive. I add newly ripped CDs to that external, then Option-drag the folder into iTunes and have iTunes convert those tunes to AAC 256 (which are thus kept in the main library).
    machintsandtips: "This does involve a couple of extra steps for movies and TV shows I automatically download from the iTunes Store, since I have to move that media to the external drive, and then move it back into iTunes."
    Again, splitting your library across various drives does involve more user interaction. Your only alternative would be e.g. a 4 gig (or more) Drobo or similar. However, in order to have a backup, you would need another, equally sized, Drobo which you would need to keep in sync with the primary one.
    Everything you linked to will definitely resolve the problem for my current library, but as soon as I start adding media, it seems like it will quickly get disorganized.
    I humbly disagree. It all depends on what you want: dish out $$$ for bigger drives, or split your lib across various (already available) drives. Whichever path you decide on, you'll be more involved in maintaining your library. I consider that a plus because it lets me manage things as I want.
    However, there are tools that could help you manage your split lib, one of which is TuneSpan.
    Best of luck!
    JGG

  • More-efficient keyboard

    Anyone have a way to get a better onscreen keyboard going than the QWERTY one that is stock on the iPhone? I love the FITALY (fitaly.com) keyboard on my old Palm, but that company says Apple won't let them do a system keyboard. One would like to think that a Dvorak or other non-18th century keyboard could be available among the international keyboards, but I don't find one.
    And yes, I've already tried the alternatives, e.g., TikiNotes, which takes me three times as long to type on as the QWERTY.
    If not, this would be a great suggestion for a new feature for Apple to incorporate.

    Just to let everybody know I am now 28 years old. I first learned how to type in typing class in junior high. I was using the Qwerty layout. The only nice thing I can say about the Qwerty layout is that it's available at any computer you want to use without any configuration.
    Then I looked online for a better and more efficient way to type. That's when I learned about the Dvorak keyboard layout. This was about four years ago. I stuck with it for about two years. I felt my right hand was doing a lot more typing than my left hand. It felt too lopsided for me. But that's just my opinion. I went on the hunt for something better than Dvorak and I found the glorious Colemak keyboard layout.
    I have been typing with it ever since. My hands are a lot more comfortable and I can type faster now. It took me a month to actually get comfortable with the layout. There is a Java applet on Colemak's website:
    www.colemak.com/Compare
    You can just copy and paste a body of text and click Calculate; it will analyze the typing and compare the three different keyboard layouts. I just hope it becomes an ANSI standard like Dvorak has. I hope that happens in the future.
    I just want everybody to know there is a third option out there and it's great. If ever Colemak goes away I will go back to Dvorak. I will never go back to the QWERTY keyboard layout.
    Just wanted to give my two cents worth.

  • How do i move my itunes library to a different drive

    I underestimated the space I needed for my iTunes library. At the time, I thought 75GB was more than enough for my 60GB iPod; then the videos and movies came along. Although I cannot put them on my 4th gen iPod, I did download a bunch of free TV shows and video podcasts.
    I thought I would simply delete some already-viewed files as I pushed the limit, but then the Recycle Bin on my iPod-specific drive screwed up. I can no longer delete anything off that drive. Since the partition I made earlier was kind of small for my iPod drive, I think I want to reformat the drive to a different partition.
    Of course, I need to move my iTunes library first. Can someone please teach me how to move my iTunes library to a different drive? Thank you very much.

    When I added a second drive internal drive I used this guide: Moving your iTunes Music Folder

  • Different Drive Structure Between Principal and Mirror

    We have a 2-node (active/active) cluster with four failover instances installed. We use a dedicated drive for data files, one for log files, and one for tempdb, per instance. Predicting that we will be adding more instances in the future, we realized that we would run out of drive letters. We intend to switch over to mounted volumes instead; that way we only use 1 drive letter per instance and will be able to add many more failover instances in the future. However, seeing as we have over 150 databases across all current instances, reconfiguring the cluster drive setup will require a lot of time and work and is not in the immediate plans for our team (me :( ). I need to mirror as many databases as possible, based on priority, to a DR server in a geographically remote location. Since this server isn't set up with SQL Server yet, I was wondering if it would be a good idea to set up the DR server with mounted volumes now and establish mirroring between our production environment and the DR server that way, so that at least half the battle is won. I am aware that the best practice for mirroring is to have the same drive structure between the principal and mirror server; however, it is not a requirement. The only example issue I have seen with this setup is when you add a data file to a database: the DDL statement gets sent over to the mirror and will fail, since the drive setup is not the same. This is something I am willing to deal with, since adding data files to our databases is not something we do often, or at all. As new databases are added and mirrored, the WITH MOVE clause of the RESTORE statement can be used to place the files in their correct location on the mirror server.
    Other than this, are there any other things to consider for this kind of mirroring setup?
    Leroy G. Brown

    Hi Leroy G,
    According to my knowledge, you can deploy database mirroring with a different drive structure. The known issue is that if you add a file to the principal and the DDL command cannot be applied on the mirror, your database will go into a suspended status until you duplicate the path on the mirror.
    However, as you posted, you can use the WITH MOVE option to create the database on the mirror server. For more details, please follow the steps in this blog.
    Meanwhile, about setting up database mirroring, I recommend you review this online article.
    Thanks,
    Lydia Zhang

  • I need a more efficient method of transferring data from RT in a FP2010 to the host.

    I am currently using LV6.1.
    My host program is currently using DataSocket to read and write data to and from a FieldPoint 2010 system. My controls and indicators are defined as DataSockets. In FP I have an RT loop talking to a communication loop using RT FIFOs. The communication loop is using Publish to send and receive via the DataSocket indicators and controls in the host program. I am running out of bandwidth in getting data to and from the host, and there is not very much data. The RT program includes 2 PIDs and 2 filters. There are 10 floats going to the host and 10 floats coming back from the host. The desired time-critical loop time is 20ms. The actual loop time is about 14ms. Data is moving back and forth between host and FP several times a second without regularity (not a problem). If I add a couple more floats in each direction, the communications drop to once every several seconds (too slow).
    Is there a more efficient method of transferring data back and forth between the Host and the FP system?
    Will LV8 provide faster communications between the host and the FP system? I may have the option of moving up.
    Thanks,
    Chris

    Chris, 
    Sounds like you might be maxing out the CPU on the Fieldpoint.
    Datasocket is considered a pretty slow method of moving data between hosts and targets as it has quite a bit of overhead associated with it.  There are several things you could do. One, instead of using a datasocket for each float you want to transfer (which I assume you are doing), try using an array of floats and use just one datasocket transfer for the whole array.  This is often quite a bit faster than calling a publish VI for many different variables.
    Also, as Xu mentioned, using a raw TCP connection would be the fastest way to move data.  I would recommend taking a look at the TCP examples that ship with LabVIEW to see how to effectively use these. 
    LabVIEW 8 introduced the shared variable, which when network enabled, makes data transfer very simple and is quite a bit faster than a comparable datasocket transfer.  While faster than datasocket, they are still slower than just flat out using a raw TCP connection, but they are much more flexible.  Also, the shared variables can function in the RT FIFO capacity and clean up your diagram quite a bit (while maintaining the RT FIFO functionality).
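    To illustrate the first suggestion in language-neutral terms: packing all the floats into one message means one transfer and one chunk of protocol overhead instead of ten. A minimal Python sketch of the idea (the framing format here is made up; in LabVIEW the equivalent would be flattening the array and sending it in a single write):

```python
import struct

# Made-up framing: a 4-byte big-endian count followed by that many
# big-endian float64 values, so ten floats travel as one payload rather
# than ten separate transfers, each with its own per-message overhead.
values = [1.0, 2.5, -3.25, 4.0, 5.5, 6.0, 7.75, 8.0, 9.5, 10.0]

payload = struct.pack(">I", len(values)) + struct.pack(f">{len(values)}d", *values)

def unpack_payload(data: bytes) -> list[float]:
    """Decode the count-prefixed float array on the receiving side."""
    (count,) = struct.unpack_from(">I", data, 0)
    return list(struct.unpack_from(f">{count}d", data, 4))

# One 84-byte message instead of ten small ones.
assert unpack_payload(payload) == values
```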
    Hope this helps.
    --Paul Mandeltort
    Automotive and Industrial Communications Product Marketing

  • CS5 install on case-sensitive file system - can't choose different drive (Mac OS)

    I just upgraded my macbook pro to a new drive and 10.6, and chose 'case sensitive' HFSX, 'cause I'm a heavy command line user and wanted the maximum BASH experience.
    I'm trying to install the CS5 demo to try some web design tools, and the installer immediately says "Installation to case-sensitive drives is not supported. Please choose a different drive location to install." So case-sensitive drives aren't supported; crappy but fair enough.
    The error message leads me to think that I can just choose a non-case-sensitive drive to install to, but I never get a chance to pick one - I click on the installer and it goes straight to the error message.
    So - how do I pick a different drive to install to? Am I just an idiot, is there no way to select a different drive, or will it not install on a system that even BOOTS from a c.s. drive, regardless of the format of the drive that CS5 is installed to?
    I called the support number, and the poor fellow on the other end suggested I re-download the demo, and if the new download fails call Apple support to report my 'drive error'.
    I'm hoping to avoid an entire backup-reformat-restore and lose CLI compatibility just to try some demo software.
    ch

    That is part of why I would prefer case sensitive by default. I know some server packages do the folding for you, the same as some web servers do not differentiate between 'htm' and 'html' when people type in requests, but most of the time the backend server is going to be case sensitive and it is not safe to assume (or hope) that the service will fix things. Compensating for mistakes is fine, but allowing such silent corruption is not a terribly laudable thing and it encourages people to be careless.
    Every once in a while I encounter someone submitting work where their configuration values and file names do not match, and 'well my laptop silently fixes it for me since it does not care' is a poor excuse. And if I sent broken filenames upstream, or even worse committed them to be used on a server, that would be a pretty significant professional failure.
    Back to Adobe specifically: I have been trying the suggestion one poster mentioned, where one installs the Adobe applications to a case-insensitive drive and then copies over the installed files. This does not quite work out of the box, but for reasons I would be hard pressed to believe are Apple's fault.
    For instance the first error I encounter is the inability of Bridge to load:
    "@executable_path/../Frameworks/WRServices.framework/Versions/A/WRServices"
    When I go look inside the app directories I can see that in Bridge the file has been named 'awrservices', but in Illustrator it is correctly named AWRServices. So it looks more like a problem in whatever version control they are using. The only way I can picture (with my admittedly limited knowledge of what I am sure is a large and complex project with all sorts of legacy issues) the installer toolchain factoring in as a problem is if they have mismatches in their own scripts/packaging and have been depending on HFS's bad behavior to hide the problem. I can understand not wanting to invest the time to pay down the technical debt on such an issue, but having such errors in your configuration causes long-term headaches.
    And I say this as someone who worked on just such a project, moving a software suite with legacy code stretching back longer than Adobe has existed as a company. This conversion included moving from a case-insensitive filesystem to a case-sensitive one, and yes, there were lots of problems that the old FAT32 system hid from us, but it really paid off over the long run to fix them rather than try to twist the code to compensate.
    Having said that, if the problem is really that they do not want to go update their filenames (in version control or config files), then you can always add folding to your loaders.  I have had to do that a few times due to upstream people developing on case insensitive systems and sending data files with incorrect file names.  This is an old class of problem, and while I can empathize with the struggles project managers have trying to get approval for paying down technical debt, the problem never gets better on its own and usually gets worse.
    Which is why I responded with so much grump to the 'I never needed it' argument since that is exactly the type of customer comment that marketing tends to point to in order to push such things off the schedule.  This is the type of thing where the customer does not really know what they want because they are already accustomed to broken behavior and most of the problems are hidden from their immediate view.  It is easy to cover up the limitations since modern UI (and their search capabilities) can handle this. 
    It is not just arcane developers stuff, and it is the same transition that people have made with things like spaces, quotes, and parentheses, where years ago users believed they had no need for them since they were not using them, but they were only not using them because they did not work.   Today try to tell a modern user they can not put (, ", ) or even ' ' in their filenames and they would rightly question why this piece of obvious functionality is not working since today they are used to it working and no longer automatically compensate for it.
    I also find it ironic that by default OS X hides a number of file extensions, so from the user's perspective you can have multiple files with the exact same name displayed: you can get display issues where 'foo' is the same as 'foO' if both have .txt, but 'foo' and 'foO' are not the same if one has .txt while the other is .pdf. Add to this confusion cases like 'foo.txt' and 'foo.pdf' both being shortened to 'foo'.

  • Move all pictures of current catalog to different drive and folder

    The drive I use today is getting too small for more pictures in the future.
    I want to move all my pictures in the current catalog to a new drive (different drive letter) and a new folder structure (directory name).
    How can I do this in the easiest way in Lightroom 1.4.1 and WinXP?
    Thanks for help, Thomas

    Launch LR by double-clicking the .lrcat file on the external. That registers that catalog. You can choose in Prefs to always launch with that catalog. Another thing you might do, after ascertaining that all folders were copied over, is delete the image files on the internal, but not the .lrcat and Previews files. That way, you'll still have the use of the previews on the internal, but save huge space by deleting the image files there.
    Don't forget a backup scheme that encompasses the external!

  • Suggests for a more efficient query?

    I have a client (customer) that uses a 3rd party software to display graphs of their systems. The clients are constantly asking me (the DBA consultant) to fix the database so it runs faster. I've done as much tuning as I can on the database side. It's now time to address the application issues. The good news is my client is the 4th largest customer of this 3rd party software and the software company has listened and responded in the past to suggestions.
    All of the tables are set up the same, with the first column being a DATE datatype and the remaining columns being values for different data points (data_col1, data_col2, etc.). Oh, and that first date column is always named "timestamp" in LOWER case, so we have to use double quotes around that column name all of the time. Each table collects one record per minute, all day, every day. There are 4 database systems, about 150 tables per system, averaging 20 data columns per table. I did partition each table by month and added a local index on the "timestamp" column. That brought the full table scans down to full partition index scans.
    All of the SELECT queries look like the following with changes in the column name, table name and date ranges. (Yes, we will be addressing the issue of incorporating bind variables for the dates with the software provider.)
    Can anyone suggest a more efficient query? I've been trying some analytic function queries but haven't come up with the correct results yet.
    SELECT "timestamp" AS "timestamp", "DATA_COL1" AS "DATA_COL1"
      FROM "T_TABLE"
     WHERE "timestamp" >=
           (SELECT MIN("tb"."timestamp") AS "timestamp"
              FROM (SELECT MAX("timestamp") AS "timestamp"
                      FROM "T_TABLE"
                     WHERE "timestamp" < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')
                    UNION
                    SELECT MIN("timestamp")
                      FROM "T_TABLE"
                     WHERE "timestamp" >= TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
             WHERE NOT "timestamp" IS NULL)
       AND "timestamp" <=
           (SELECT MAX("tb"."timestamp") AS "timestamp"
              FROM (SELECT MIN("timestamp") AS "timestamp"
                      FROM "T_TABLE"
                     WHERE "timestamp" > TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')
                    UNION
                    SELECT MAX("timestamp")
                      FROM "T_TABLE"
                     WHERE "timestamp" <= TO_DATE('2006-01-21 12:12:39', 'YYYY-MM-DD HH24:MI:SS')) "tb"
             WHERE NOT "timestamp" IS NULL)
     ORDER BY "timestamp"
    Here are the queries for a sample table to test with:
    CREATE TABLE T_TABLE
    ( "timestamp" DATE,
      DATA_COL1 NUMBER
    );
    INSERT INTO T_TABLE
    (SELECT TO_DATE('01/20/2006', 'MM/DD/YYYY') + (LEVEL-1) * 1/1440,
            LEVEL * 0.1
       FROM dual CONNECT BY 1=1
        AND LEVEL <= (TO_DATE('01/25/2006','MM/DD/YYYY') - TO_DATE('01/20/2006', 'MM/DD/YYYY'))*1440);
    Thanks.

    No need for analytic functions here (they’ll likely be slower).
    1. No need for UNION ... use UNION ALL.
    2. No need for <quote>WHERE NOT "timestamp" IS NULL</quote> … the MIN and MAX will take care of nulls.
    3. Ask if they really need the data sorted … the s/w with the graphs may do its own sorting … in which case take the ORDER BY out too.
    4. Make sure to have indexes on "timestamp".
    What you want to see for those innermost MAX/MIN subqueries are executions like:
    03:19:12 session_148> SELECT MAX(ts) AS ts
    03:19:14   2  FROM "T_TABLE"
    03:19:14   3  WHERE ts < TO_DATE('2006-01-21 00:12:39', 'YYYY-MM-DD HH24:MI:SS');
    TS
    21-jan-2006 00:12:00
    Execution Plan
       0   SELECT STATEMENT Optimizer=ALL_ROWS (Cost=2.0013301108 Card=1 Bytes=9)
       1    0   SORT (AGGREGATE)
       2    1     FIRST ROW (Cost=2.0013301108 Card=1453 Bytes=13077)
       3    2       INDEX (RANGE SCAN (MIN/MAX))OF 'T_IDX' (INDEX) (Cost=2.0013301108 Card=1453 Bytes=13077)
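    For what it's worth, tips 1 and 2 can be folded directly into the posted query. A minimal runnable sketch of the simplified shape, using SQLite and text timestamps purely so the example is self-contained (in Oracle you would keep the DATE column, the TO_DATE calls, and the quoted lower-case "timestamp"):

```python
import sqlite3

# Self-contained stand-in for the Oracle sample: one row per minute for
# 2006-01-21, with "timestamp" stored as sortable text (an assumption made
# only so this runs under SQLite).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute('CREATE TABLE t_table ("timestamp" TEXT, data_col1 REAL)')
cur.executemany(
    "INSERT INTO t_table VALUES (?, ?)",
    [(f"2006-01-21 {h:02d}:{m:02d}:00", h * 60 + m)
     for h in range(24) for m in range(60)],
)

start, end = "2006-01-21 00:12:39", "2006-01-21 12:12:39"
# Same boundary logic as the original, but with UNION ALL (no duplicate
# elimination needed) and without the redundant IS NOT NULL filter, since
# MIN/MAX already ignore NULLs. ORDER BY kept, per the original.
cur.execute("""
    SELECT "timestamp", data_col1
      FROM t_table
     WHERE "timestamp" >= (SELECT MIN(ts) FROM (
               SELECT MAX("timestamp") AS ts FROM t_table WHERE "timestamp" <  ?
               UNION ALL
               SELECT MIN("timestamp")       FROM t_table WHERE "timestamp" >= ?))
       AND "timestamp" <= (SELECT MAX(ts) FROM (
               SELECT MIN("timestamp") AS ts FROM t_table WHERE "timestamp" >  ?
               UNION ALL
               SELECT MAX("timestamp")       FROM t_table WHERE "timestamp" <= ?))
     ORDER BY "timestamp"
""", (start, start, end, end))
result = cur.fetchall()
# result spans 00:12:00 (the last sample before the start time) through
# 12:13:00 (the first sample after the end time), inclusive.
```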

  • Updating table contents more efficiently

    Dear forumers,
    I have a situation as elaborated below. Is there a more efficient way of updating the table contents (itab D)?
    loop at itab A.
      execute FM using fields from itab A.
      FM returns itab B.
      loop at itab B.
        populate itab C containing fields from itabs A and B.
      endloop.
    endloop.
    copy contents from itab D to itab D_temp.   " ***1
    clear itab D.
    loop at itab C.
      populate itab D containing fields from itabs C and D_temp.   " ***2
    endloop.
    Reason this codeline (***2) is implemented:
    The key fields in the initial itab D (***1) are different from the key fields in the end result of itab D (***2).
    ***1
    Key fields: AUFNR, OBKNR, WERKS, EQUNR, SERNR, MATNR
    ***2
    Key fields: DOCUMENTTYPE, DOCUMENTNUMBER, DOCUMENTVERSION, DOCUMENTPART, EQUNR, SERNR, MATNR
    Structure in itab C (key fields in ***2):
      equnr           TYPE viaufkst-equnr
      sernr           TYPE objk-sernr
      matnr           TYPE objk-matnr
      documenttype    TYPE zzdoc_data-documenttype
      documentnumber  TYPE zzdoc_data-documentnumber
      documentversion TYPE zzdoc_data-documentversion
      documentpart    TYPE zzdoc_data-documentpart
      description     TYPE zzdoc_data-description
    Structure in itab D:
      aufnr           TYPE viaufkst-aufnr
      equnr           TYPE viaufkst-equnr
      obknr           TYPE viaufkst-obknr
      qmnum           TYPE viaufkst-qmnum
      auart           TYPE viaufkst-auart
      werks           TYPE viaufkst-werks
      sernr           TYPE objk-sernr
      matnr           TYPE objk-matnr
      documenttype    TYPE zzdoc_data-documenttype
      documentnumber  TYPE zzdoc_data-documentnumber
      documentversion TYPE zzdoc_data-documentversion
      documentpart    TYPE zzdoc_data-documentpart
      description     TYPE zzdoc_data-description
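    In more conventional terms, the ***2 rebuild is a keyed join of itab C against the old itab D on (EQUNR, SERNR, MATNR). A hypothetical Python sketch of the same idea, with field names borrowed from the structures above and made-up values, showing the single-pass index that replaces a nested loop:

```python
# Hypothetical illustration (not the actual report): rebuilding itab D is a
# keyed join of itab C rows against the old itab D rows. Field names come
# from the structures in the post; the values are invented.
itab_d_old = [
    {"aufnr": "100", "obknr": "1", "qmnum": "Q1", "auart": "ZM01",
     "werks": "1000", "equnr": "E1", "sernr": "S1", "matnr": "M1"},
]
itab_c = [
    {"equnr": "E1", "sernr": "S1", "matnr": "M1",
     "documenttype": "DRW", "documentnumber": "D001",
     "documentversion": "00", "documentpart": "000",
     "description": "Drawing"},
]

# Index the old rows once; each lookup is then O(1) instead of a scan
# (the ABAP equivalent is SORT + READ ... BINARY SEARCH, or a hashed table).
old_by_key = {(r["equnr"], r["sernr"], r["matnr"]): r for r in itab_d_old}

itab_d_new = []
for c in itab_c:
    old = old_by_key.get((c["equnr"], c["sernr"], c["matnr"]))
    if old is not None:
        itab_d_new.append({**old, **c})  # fields from old D plus fields from C
```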
    Many thanks for any inputs here at all.

    TRY THIS
    CLEAR li_ord_sln_tmp.
      li_ord_sln_tmp = i_ord_sln.
      SORT li_ord_sln_tmp BY equnr sernr matnr.
      DELETE ADJACENT DUPLICATES FROM li_ord_sln_tmp
        COMPARING equnr sernr matnr.
      CLEAR i_ord_sln.
      IF ( li_ord_sln_tmp[] IS NOT INITIAL AND
           i_docs[] IS NOT INITIAL ).
        SORT i_docs[] BY equnr sernr matnr.
      ENDIF.
    LOOP AT i_outgoing into w_outgoing.
        CALL FUNCTION 'ZZ_GET_ACTIVE_DMS'
          EXPORTING
            matnr      = w_outgoing-matnr
            werks      = w_outgoing-werks
            sernr      = w_outgoing-sernr
          TABLES
            i_doc_list = li_doclist
          EXCEPTIONS
            OTHERS     = 1.
        IF sy-subrc = 0.
        LOOP AT li_doclist INTO lw_doclist.
          CLEAR w_ord_sln.
          MOVE-CORRESPONDING w_outgoing TO w_ord_sln.
          READ TABLE li_ord_sln_tmp
            INTO lw_ord_sln_tmp
            WITH KEY equnr = w_outgoing-equnr   " assuming the key comes
                     sernr = w_outgoing-sernr   " from the current
                     matnr = w_outgoing-matnr   " i_outgoing row
            BINARY SEARCH.
          IF sy-subrc = 0.
            w_ord_sln-werks = w_outgoing-werks.
            w_ord_sln-aufnr = lw_ord_sln_tmp-aufnr.
            w_ord_sln-obknr = lw_ord_sln_tmp-obknr.
            w_ord_sln-qmnum = lw_ord_sln_tmp-qmnum.
            w_ord_sln-auart = lw_ord_sln_tmp-auart.
            APPEND w_ord_sln TO i_ord_sln.     " (i.e. this is itab D)
          ENDIF.
        ENDLOOP.
      ENDIF.
    ENDLOOP.
    Regards
    Sajid
    Edited by: shaik sajid on Jul 20, 2009 8:52 AM

  • I'm trying to install OEM SR 2.1: Windows 95B in Qemu...  How do I figure out which brand of CD-ROM drive Qemu emulates since I have three different drive specific boot disks...?


    Not exactly...; all MS-produced Win9X/ME install CDs are ISO-9660, and maybe even Joliet too... The really confusing part is that not all Windows 98 SE full-install CDs are even bootable in the first place (they tend to only ship one if the PC is capable of booting directly from its optical drive...; the vast majority of PCs have to use the chain-loading method due to buggy, poorly written, outdated BIOSes...; etc.). OEM SR 2.1/Windows 95B is really quite special and innovative for its time in that it was the first service release of Windows 95 to support larger HDs via FAT32, which back then was still brand-new, exciting technology, and it has proven to be quite useful and versatile since most flash memory storage devices come pre-formatted FAT32.
    There's also a hidden X:\Usbsupp folder on the CD that contains the MS USB 1.1 supplement patches/upgrader; a common misconception is that you don't need to install it unless your PC has USB 1.1 ports. Even without USB 1.1 ports, the upgrades to C:\Windows\system make it a lot less buggy, more secure, and more stable. For anyone else who enjoys emulating OEM SR 2.1, there's also a great unofficial SP 1 for it which has so many useful patches that I'm considering integrating it into all of my install CDs...
    Unfortunately I've found that getting it to run decently requires tracking down a bevy of essential (but, since MS never released them in patch form, nominally optional) DLL upgrades... I guess not all machines are that unreliable, but better to have them... just in case you encounter a PC with a buggy BIOS that has hardware conniptions and/or ACPI/APM bugs.

  • How to Install Acrobat Reader 9 on a Different Drive

    Does someone know how to direct Acrobat Reader 9 to a different drive when installing? It always defaults to drive C. I want it on drive D. Help!

    There are hundreds (if not thousands) of references to registry files in \Windows\System32 from within Reader. The registry keys likewise point to a program in the C:\Program Files\Adobe\Reader folder.
    Unless you are capable of and willing to go in and rewrite the code for each and every reference in every last file of the program so that they point to a registry on a separate drive from the program itself, and vice versa, you will find that it won't work; and if it does, it'll keep crashing and giving you errors all the time.
    There are portable readers designed to run without the registry references (e.g. from a flash drive), but they won't function as well as Reader in every case.
    You might want to try running Disk Cleanup, defragmenting your C:\ drive, and running a compression to free up file space.
    Another thing I like to pass along is a "deep cleaning" Disk Cleanup trick.
    Disk Cleanup can clear out a lot of old, unused, and junk files from your hard drive: things that clutter up the drive and make defragmenting take a lot more time than it should. Here's a neat way to clear up more unused files on your system than the Windows™ Disk Cleanup utility will do on its own:
    c:\windows\system32\cleanmgr.exe /dc /sageset:1
    c:
    cd \windows\prefetch
    del *.* /q
    Copy the code above and paste it into a document using Notepad or Wordpad (do not use Microsoft™ Word™ or Corel™ WordPerfect™). Save the file as "clean.bat" (minus the quotes, of course), a Windows™ batch file. To do this, click Save As instead of Save, and choose All Files from the file-type drop-down menu. Save it to your desktop.
    Double-click the batch file, and it will open a configuration tool for your Disk Cleanup utility. You will see a lot more checkboxes in the list than your regular Disk Cleanup shows. Check EVERYTHING. Don't worry, these are all files your system doesn't use.
    Once you have checked all the boxes, click OK and the window will close. You have now set your Disk Cleanup to thoroughly clean your hard drive.
    Now we need to go back to the batch file. Right-click it, and select Edit. It will open in Notepad or Wordpad, depending on which one you made it in. When it opens, go to the first line, where it reads "sageset", and change it to "sagerun" (again, minus the quotes, of course) and save it. You don't need to choose a destination or file type. Just save it.
    Now double-click the batch file again, and it will run Disk Cleanup with your deep-cleaning settings.
    Copy the batch file to your Documents folder, and any time you need to run it, just double-click it. Now here's another tip that will free up a lot of hard drive space on top of the deep clean.
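If editing the batch file by hand feels error-prone, the sageset-to-sagerun switch can also be scripted. A minimal Python sketch (the file name clean.bat and its contents follow the steps above; run it from the folder holding the batch file):

```python
from pathlib import Path

# Write a hypothetical clean.bat, matching the steps above
bat = Path("clean.bat")
bat.write_text(
    "c:\\windows\\system32\\cleanmgr.exe /dc /sageset:1\n"
    "c:\n"
    "cd \\windows\\prefetch\n"
    "del *.* /q\n"
)

# Swap the configure switch (/sageset:) for the run switch (/sagerun:)
text = bat.read_text().replace("/sageset:", "/sagerun:")
bat.write_text(text)

print(bat.read_text().splitlines()[0])  # first line now uses /sagerun:1
```

The profile number after the switch (1 here) must match between the sageset and sagerun runs, since it names the saved cleanup settings.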
    Also:
    If you open the regular Disk Cleanup (Start\All Programs\Accessories\System Tools\Disk Cleanup), there is a More Options tab at the top of the window. Click the tab, and at the bottom you should see a section referring to System Restore and Shadow Copies (Vista) or System Restore (XP).
    Click Clean Up to remove the extra restore points and Shadow Copies of Windows™ that are stored on your hard drive for system restoration. Don't worry, this will remove all but the most recent restore point.
    You should see anywhere from 3.5 to 14 gigs of hard drive space freed up once you do this.
    One last thing:
    If you have a C:\ and a D:\ drive, the D:\ drive should ideally be used for file storage.
    In other words, all of your pictures, movies, documents, saved e-mails, etc. (anything that ISN'T a program) should be on this drive to keep your C:\ drive working better.
    All of those files opening and closing on the C:\ drive will lead to fragmentation of your C:\ drive, which will eventually slow the whole system down and could kill your hard drive.
    Additionally, you mention that you are low on space on the C:\ drive. That's another bad thing. I've learned over the years that you should never let a working drive (one that runs your operating system, or even one that holds files you access daily) get below 25% free space. Caching files as you work on them gets extremely slow if the drive has to jump back and forth between areas of the physical platters to record changes as you make them; this is much less likely when 25% or more of the drive is free.
    Your documents, pictures, music, and movies will open fine from the D:\ drive, whereas software won't always run from a secondary drive. It's easier to move your files to the D:\ drive to make space than to install a second OS and then reinstall all of your software.
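The 25%-free rule of thumb above is easy to check programmatically. A small Python sketch using the standard library (the 25% threshold is the poster's guideline, not a hard OS limit):

```python
import shutil

def free_space_pct(path="/"):
    """Return the percentage of the drive holding `path` that is free."""
    usage = shutil.disk_usage(path)
    return 100.0 * usage.free / usage.total

# Warn when a working drive drops below the suggested 25% free
pct = free_space_pct("/")          # on Windows, use e.g. "C:\\"
if pct < 25.0:
    print(f"Low space: only {pct:.1f}% free; consider moving files to another drive")
else:
    print(f"{pct:.1f}% free; OK by the 25% rule of thumb")
```

shutil.disk_usage reports the whole volume containing the given path, so checking "/" (or "C:\\") covers the system drive regardless of which folder you pass underneath it.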

  • Install DPM agent on Different Drive

    Hi, is it possible to install the DPM agent on a different drive rather than C:\?

    Yes, you can, by deploying the DPM agent manually.
    To change the directory, do the following:
    For a 64-bit computer, type cd /d <assigned drive letter>:\Program Files\Microsoft DPM\DPM\ProtectionAgents\RA\3.0.<build number>.0\amd64 where
    <assigned drive letter> is the drive letter that you assigned in the previous step and
    <build number> is the latest DPM build number. For example:
    cd /d X:\Program Files\Microsoft DPM\DPM\ProtectionAgents\RA\3.0.7696.0\amd64
    For more info, you can refer to the link below:
    http://technet.microsoft.com/en-us/library/hh758186.aspx
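As a small illustration of the path pattern in that cd command, here is a Python sketch that assembles it from the two placeholders; the values X and 3.0.7696.0 are just the example values from the post, not anything DPM-specific:

```python
def dpm_agent_path(drive_letter, build):
    """Build the 64-bit DPM protection-agent folder path from the post above."""
    return (f"{drive_letter}:\\Program Files\\Microsoft DPM\\DPM\\"
            f"ProtectionAgents\\RA\\{build}\\amd64")

print(dpm_agent_path("X", "3.0.7696.0"))
# -> X:\Program Files\Microsoft DPM\DPM\ProtectionAgents\RA\3.0.7696.0\amd64
```

Only the drive letter and build number vary between installs; the rest of the folder layout under Microsoft DPM is fixed by the installer.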
    Mai Ali | My blog: Technical | Twitter: Mai Ali
