NB305 - Issues moving a large volume of files

Hi,
I recently bought the NB305 and attempted to move approx. 1800 files: 1) from an external hard drive to the C drive, and 2) from the C drive to a Walkman. Both times it ended in disaster; it seemed to copy only about 100 random files and then gave up.
Any thoughts?

Hi,
Thanks for your response.
HDD: I'm using the same cable that I used to move the files from my Satellite laptop (XP) onto the HDD, and that worked fine. I also tried two USB ports on the NB305, and both failed.
The way I had to do it was to move the files in smaller chunks.
As for the Walkman, it worked fine on the Satellite and on other PCs.
Cheers
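
For what it's worth, here is a minimal sketch of the "smaller chunks" workaround in Python (the source and destination paths are hypothetical): copy the files in small batches and log any file that fails instead of giving up on the whole transfer.

    import os
    import shutil

    def copy_in_batches(src_dir, dst_dir, batch_size=100):
        # Copy files in small batches; record failures instead of
        # aborting the whole transfer when one file cannot be copied.
        os.makedirs(dst_dir, exist_ok=True)
        files = sorted(os.listdir(src_dir))
        failed = []
        for i in range(0, len(files), batch_size):
            for name in files[i:i + batch_size]:
                src = os.path.join(src_dir, name)
                if not os.path.isfile(src):
                    continue  # this simple sketch skips sub-directories
                try:
                    shutil.copy2(src, os.path.join(dst_dir, name))
                except OSError as err:
                    failed.append((name, err))
            print(f"Finished batch {i // batch_size + 1}")
        return failed

    # Example with hypothetical drive paths:
    # errors = copy_in_batches("E:/music", "C:/Users/me/Music", batch_size=100)
    # print(errors)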

Similar Messages

  • Ways to handle large-volume data (file size = 60 MB) in a PI 7.0 file-to-file scenario

    Hi,
    In a file-to-file scenario (flat file to XML file), the flat file is picked up by FCC and then sent to XI. In XI it performs message mapping and then an XSL transformation in sequence.
    The scenario works fine for small files (up to 5 MB), but when the input flat file is larger than 60 MB, XI shows a lot of problems, such as (1) a JCo call error, or (2) sometimes XI even stops and we have to start it manually again for it to work properly.
    Please suggest a way to handle large volumes (file size up to 60 MB) in a PI 7.0 file-to-file scenario.
    Best Regards,
    Madan Agrawal.

    Hi Madan,
    If every record of your source file is processed in the target system, maybe you could split your source file into several messages by setting this up with the Recordsets per Message parameter.
    However, since you just want to convert your .txt file into an .xml file, try first setting the
    EO_MSG_SIZE_LIMIT parameter in SXMB_ADM.
    That may solve the problem in the Integration Engine, but the problem will persist in the Adapter Engine, i.e. the JCo call error,
    because the file is first processed in the Adapter Engine (File Content Conversion and so on)
    and only then sent to the pipeline in the Integration Engine.
    Carlos
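
    As an illustration of the splitting idea (outside PI itself), a minimal Python sketch with hypothetical file paths: cut the large flat file into smaller chunk files so each chunk can be picked up and processed as a separate message.

        import os

        def split_flat_file(src_path, out_dir, lines_per_chunk=50000):
            # Write every lines_per_chunk lines of the source file into a
            # separate chunk file in out_dir.
            os.makedirs(out_dir, exist_ok=True)
            chunk_no = 0
            out = None
            with open(src_path, "r", encoding="utf-8") as src:
                for line_no, line in enumerate(src):
                    if line_no % lines_per_chunk == 0:
                        if out:
                            out.close()
                        chunk_no += 1
                        out = open(os.path.join(out_dir, f"chunk_{chunk_no:04d}.txt"),
                                   "w", encoding="utf-8")
                    out.write(line)
            if out:
                out.close()
            return chunk_no

        # Example with hypothetical paths:
        # split_flat_file("/data/in/big_input.txt", "/data/in/chunks", 50000)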

  • Transferring a large volume of files from Mac to PC?

    Hi, I have a Mac with OS X 10.4.8 and a PC, and I need to transfer a large amount of files (around 200 GB) from the Mac to the PC.
    I have lots of external HDs, which are all used by either the PC or the Macs. The ones formatted for the Macs cannot be read at all by the PC without some expensive software; the PC-formatted one I have appears to be readable on the Mac, but when I try to copy the files onto it or change anything, I am unable to change/add anything due to permissions problems.
    Not sure what to do. I have quite a few HDs with plenty of space, but if they are all in the wrong format I can't really wipe/reformat them easily with files still on them, nor do I want to buy yet another HDD just to transfer some files...
    Any ideas/advice?

    https://discussions.apple.com/docs/DOC-3003

  • In OSB, XQuery issue with large-volume data

    Hi ,
    I am facing a problem with an XQuery transformation in OSB.
    There is one XQuery transformation where I compare all the records, and if there are similar records I club them under the same first node.
    I am reading the input file from the FTP process. This works perfectly for small input data. For large input data it also works, but it takes a huge amount of time, the file is moved to the error directory, and I see duplicate records created for the same input data. I see nothing related to this file in the error log or the normal log.
    How can I check what exactly is causing the issue here, why the file is moved to the error directory, and why I am getting duplicate data for a large input (approx. 1 GB)?
    My Xquery is something like below.
    <InputParameters>
    {
        for $choice in $inputParameters1/choice
        let $withSamePrimaryID := $inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]
        let $withSamePrimaryID8 := $inputParameters1/choice[FIRSTNAME eq $choice/FIRSTNAME]
        return
            <choice>
            {
                if (data($withSamePrimaryID[1]/ClaimID) = data($withSamePrimaryID8[1]/ClaimID)) then
                    let $claimID := $withSamePrimaryID[1]/ClaimID
                    return <ClaimID>{ data($claimID) }</ClaimID>
                else
                    <ClaimID>{ data($choice/ClaimID) }</ClaimID>
            }
            </choice>
    }
    </InputParameters>

    Hi,
    I understand your use case is:
    a) read the file (from an FTP location; a .txt file, hopefully)
    b) process the file (your XQuery; I won't go into the details)
    c) do something with the result (send it to the backend system via a Business Service?)
    Also noted: large files take a long time to be processed. This depends on the memory/heap assigned to your JVM,
    so that can be considered expected behaviour.
    On the other point, the file being moved to the error directory etc.: this could be the error handler doing its job (if you have one).
    If there are no error handlers, look at the timeout and error-condition settings on your service.
    HTH
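
    As an aside, the XQuery above re-scans all records for every record ($inputParameters1/choice[PRIMARYID eq $choice/PRIMARYID]), which can grow roughly quadratically with the input size. A minimal sketch of the same "club records with the same primary ID" idea as a single-pass grouping, written in plain Python rather than OSB (the field names are taken from the XQuery, the record format is hypothetical):

        from collections import OrderedDict

        def club_by_primary_id(records):
            # records: list of dicts such as
            #   {"PRIMARYID": ..., "FIRSTNAME": ..., "ClaimID": ...}
            # Group the records by PRIMARYID in one pass instead of
            # re-scanning the whole input for every record.
            groups = OrderedDict()
            for rec in records:
                groups.setdefault(rec["PRIMARYID"], []).append(rec)
            clubbed = []
            for primary_id, recs in groups.items():
                clubbed.append({
                    "PRIMARYID": primary_id,
                    # first record's ClaimID, mirroring $withSamePrimaryID[1]/ClaimID
                    "ClaimID": recs[0]["ClaimID"],
                    "members": recs,
                })
            return clubbed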

  • Unable to read a large volume of files from PI to BW

    Hi,
    I am reading data from PI (XI) into BW using a web service DataSource. The data reaches the report if the XML contains a small number of records (say 2,000-3,000), but it does not reach the report if the XML contains more records (say 10k).
    I checked the sxmb_moni transaction; the process is always in scheduled status and the flag is green.
    Can somebody tell me what exactly the issue is?
    The BW version is 7.0.
    Thanks and Regards,
    Arya

    Hi,
    I'm also trying to integrate BW using a web service DataSource, but the problem is that even a few records can't get past sxmb_moni (with a green flag there). When I use the FM ("/BIC/CQ" + DSName), an open request appears in RSRDA (yellow) and I can see it in the PSA. Then an open request in the RDA Monitor for the DTP should be created, but nothing happens. (The daemon is started.)
    Could someone help me in such situation?
    Thanx in advance,
    Alex

  • Performance issue with a large volume of data in a report

    Hi,
    I have a report that processes a large amount of data, but it takes too long to get the data into the final ALV table. Currently I'm using this logic:
    SELECT ...
    SELECT ... FOR ALL ENTRIES ...
    LOOP AT table INTO workarea.
      READ TABLE table2 WITH KEY key = workarea-key BINARY SEARCH.
      MODIFY table.
      READ TABLE table2 WITH KEY key = workarea-key BINARY SEARCH.
      MODIFY table.
    ENDLOOP.
    Currently I select all the data I need (only the necessary fields), create one big loop, and read the other table to fill the fields of the final table.
    Edited by: Alvin Rosales on Apr 8, 2009 9:49 AM

    Hi,
    You can use field symbols instead of a work area.
    If you use field symbols there is no need for a MODIFY statement.
    Here are two equivalent pieces of code:
    1) Using a work area:
    types: begin of lty_example,
             col1 type char1,
             col2 type char1,
             col3 type char1,
           end of lty_example.
    data: lt_example type standard table of lty_example,
          lwa_example type lty_example.
    field-symbols: <lfs_example> type lty_example.
    Suppose you have the following information in your internal table:
    col1  col2  col3
    1     1     1
    1     2     2
    2     3     4
    Now you can change it with the MODIFY statement and a work area:
    loop at lt_example into lwa_example.
      lwa_example-col2 = '9'.
      modify lt_example index sy-tabix from lwa_example transporting col2.
    endloop.
    Or better, using field symbols:
    loop at lt_example assigning <lfs_example>.
      <lfs_example>-col2 = '9'.
      " no MODIFY statement is needed here
    endloop.
    The code using field symbols is roughly ten times faster than using a work area with the MODIFY statement.

  • Cannot print to PDF for large-volume PDFs.  Internal Error: 8004, 6343724, 8484240, 0

    We are having an issue printing to PDF for large-volume FM files. Printing to PDF appears to work for small/medium-volume PDFs, however.
    Internal Error: 8004, 6343724, 8484240, 0
    FrameMaker 8.0.0 for Intel
    Build: 8.0p273
    I've checked the default printer and it is set to Adobe PDF. I've also tried printing to PDF from a different FM file, in case the current file is corrupted, and the same error appears. At this point this issue is affecting our work, as we cannot save to PDF. Does anyone have any ideas I can try?
    Thanks
    Steve

    Shelia,
    Thank you for your reply.  Here are some further details on the questions you asked:
    1. At the beginning I thought the problem occurred because of the large volume: the PDF has 226 pages in total and consists of 19 FM files. (FYI, the small/medium volumes are a PDF of 112 pages in total from 16 FM files, and one of only around 20 pages from a single FM file.)
    However, I now know the problem occurs in two specific, very small FM files. These are only 3 pages / 15 pages in total, each consisting of a single FM file. So I do not think the volume is related to this problem.
    2. I have always used "save it to PDF" to create the PDF files so far. I am using the PDF Distiller that came with FrameMaker 8.
    3. All files, including graphics, are on my local drive (C drive).
    So I'm wondering if the problem is due to those two specific FM files. If so, what do you recommend to work around it?
    Thanks
    Steve

  • Storing a large volume of image files: what is better, the file system or Oracle?

    I am working on IM (Image Management) software that needs to store and manage over 8,000,000 images.
    I am not sure whether I should use the file system to store the images or the database (BLOB or CLOB).
    Until now I have only used the file system.
    Could someone who already has experience storing large volumes of images tell me the advantages and disadvantages of using the file system versus the Oracle database?
    My initial database will have 8,000,000 images, and it will grow by 3,000,000 per year.
    Each image will be between 200 KB and 8 MB in size, but the mean is 300 KB.
    I am using Oracle 10g. I have read in other forums, about PostgreSQL and Firebird, that it isn't good to store images in the database because the database always crashes.
    I need to know whether it is the same with Oracle, and why. Can I trust Oracle for a service this large? Are there tips for storing files in the database?
    Thanks for the help.
    Best Regards,
    Eduardo
    Brazil.

    1) Assuming I'm doing my math correctly, you're talking about an initial load of 2.4 TB of images with roughly 0.9 TB added per year, right? That sort of data volume certainly isn't going to cause Oracle to crash, but it does put you into the realm of a rather large database, so you have to be rather careful with the architecture.
    2) CLOBs store Character Large OBjects, so you would not use a CLOB to store binary data. You can use a BLOB. And that may be fine if you just want the database to be a bit-bucket for images. Given the volume of images you are going to have, though, I'm going to wager that you'll want the database to be a bit more sophisticated about how the images are handled, so you probably want to use [Oracle interMedia|http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14302/ch_intr.htm#IMURG1000] and store the data in OrdImage columns which provides a number of interfaces to better manage the data.
    3) Storing the data in the database would generally strike me as preferable, if only because of the recoverability implications. If you store the data on a file system, you will inevitably have cases where an application writes a file but the transaction to insert the row into the database fails, or where the transaction to delete a row from the database succeeds before the file is deleted, which can leave things inconsistent (images with nothing in the database, and database rows with no corresponding images). If something fails, you also can't restore the file system and the database to the same point in time.
    4) Given the volume of data you're dealing with, you may want to look closely at moving to 11g. There are substantial benefits to storing large objects in 11g with Advanced Compression (allowing you to compress the data in LOBs automatically and to automatically de-dupe data if you have similar images). SecureFile LOBs can also be used to substantially reduce the amount of REDO that gets generated when inserting data into a LOB column.
    Justin
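
    For reference, a minimal sketch of the simple "bit-bucket" BLOB approach, using Python with the cx_Oracle driver; the table name, columns and connection details are hypothetical, and this deliberately ignores interMedia/OrdImage:

        import cx_Oracle

        def store_image(conn, image_id, path):
            # Read the image from disk and insert it into a BLOB column.
            # Assumes a table like:
            #   CREATE TABLE images (image_id   NUMBER PRIMARY KEY,
            #                        filename   VARCHAR2(255),
            #                        image_data BLOB);
            with open(path, "rb") as f:
                data = f.read()
            with conn.cursor() as cur:
                # bind the third parameter explicitly as a BLOB
                cur.setinputsizes(None, None, cx_Oracle.BLOB)
                cur.execute(
                    "INSERT INTO images (image_id, filename, image_data) "
                    "VALUES (:1, :2, :3)",
                    [image_id, path, data])
            conn.commit()

        # Example with hypothetical credentials:
        # conn = cx_Oracle.connect("user", "password", "dbhost/service")
        # store_image(conn, 1, "/images/sample.jpg")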

  • I have over 200 hours of HD video on 5 different TB Thunderbolt GRaid hard drives. I need to reorganize my projects, moving large files from one drive to another. Advice?

    I have over 200 hours of HD video on 5 different TB Thunderbolt GRaid hard drives. I need to reorganize my projects, moving large files from one drive to another. Advice?

    Do some testing to get your method working right with some less-than-important footage.
    Copy/paste the files where you want them.
    Use the FCE Reconnect feature to tell FCE where the newly copied files reside.
    Make sure the new location and files work as expected with your Projects.
    Delete the original files if they are no longer required.
    Al

  • I have recently moved all of my files to a larger external hard drive, but when I am reminded of an iTunes upgrade it still attempts to go to the old drive.  How can I change the settings so it automatically goes to my new hard drive?

    I have recently moved all of my files to a larger external hard drive, but when I am reminded of an iTunes upgrade it still attempts to go to the old drive. How can I change the settings so it automatically goes to my new hard drive?

    lisacooney wrote:
    I have recently moved all of my files to a larger external hard drive
    How did you move these files?

  • Large-volume Data Merge issues with InDesign CS5

    I recently started using InDesign CS5 to design a marketing mail piece which requires merging data from Microsoft Excel.  With small tasks of up to 100 pieces I do not have any issues.  However, I need to merge 2,000-5,000 pieces of data through InDesign on a daily basis, and my current laptop was not able to handle merging more than 250 pieces at a time; the process of merging 250 pieces takes up to 30-45 minutes, if I get lucky and the software does not crash.
    To solve this issue, I purchased a desktop with a second-generation Core i7 processor and 8 GB of memory, thinking that this would solve my problem.  I tried to merge 1,000 pieces of data with this new computer, and I was forced to restart Adobe InDesign after 45 minutes with no results.  I then merged 500 pieces and the task completed, but the process took a little less than 30 minutes.
    I need some help with this issue because I cannot seem to find other software that can design my mail piece the way InDesign can, yet the time it takes to merge large volumes of data is very frustrating, as the software does crash from time to time after waiting a good 30-45 minutes for completion.
    Any feedback is greatly appreciated.
    Thank you!

    Operating System is Windows 7
    I do not know what you mean by Patched to 7.0.4
    I do not have a crash report; the software just freezes and I have to force-close the program.
    Thank you for your time...

  • Can't Empty Trash With Large Number of Files

    Running OS X 10.8.3
    I have a very large external drive that had a Time Machine backup on the main partition. At some point, I created a second partition, then started doing backups on the new partition. On Wed, I finally got around to doing some "housecleaning" tasks I'd been putting off. As part of that, I decided to clean up my external drive. So... I moved the old, unused and unwanted Backups.backupdb that used to be the Time Machine backup, and dragged it to the Trash.
    Bad idea.
    Now I've spent the last 3-4 days trying various strategies to actually empty the trash and reclaim the gig or so of space on my external drive.  Initially I just tried to "Empty Trash", but that took about four hours to count up the files just to "prepare to delete" them. After the file counter stopped counting up, and finally started counting down... "Deleting 482,832 files..." "Deleting 482,831 files..." etc, etc...  I decided I was on the path to success, so left the machine alone for 12-14 hours.
    When I came back, the results were not what I expected. "Deleting -582,032 files..."  What the...?
    So after leaving that to run for another few hours with no results, I stopped that process.  Tried a few other tools like Onyx, TrashIt, etc...  No luck.
    So I finally decided to say the **** with the window manager, pulled up a terminal, cd'ed to the .Trash directory for my UID on the USB volume, and ran rm -rfv Backups.backupdb
    While it seemed to run okay for a while, I started getting errors saying "File not found..." and "Invalid file name..." and various other weird things.  So now I'm doing a combination of rm -rfing individual directories, and using the finder to rename/cleanup individual Folders when OSX refuses to delete them.
    Has anyone else had this weird overflow issue with deleting large numbers of files in 10.8.x? Doesn't seem like things should be this hard...

    I'm not sure I understand this bit:
    If you're on Leopard 10.5.x, be sure you have the "action" or "gear" icon in your Finder's toolbar (Finder > View > Customize Toolbar).  If there's no toolbar, click the lozenge at the upper-right of the Finder window's title bar.  If the "gear" icon isn't in the toolbar, select View > Customize Toolbar from the menu bar.
    Then use the Time Machine "Star Wars" display:  Enter Time Machine by clicking the Time Machine icon in your Dock or select the TM icon in your Menubar.
    And this seems to defeat the whole purpose:
    If you delete an entire backup, it will disappear from the Timeline and the "cascade" of Finder windows, but it will not actually delete the backup copy of any item that was present at the time of any remaining backup. Thus you may not gain much space. This is usually fairly quick
    I'm trying to reclaim space on a volume that had a Time Machine backup that isn't needed anymore. I'm deleting it so I can get that 1 GB+ of space back. Is there some "official" way you're supposed to delete these things so that you actually get your hard drive space back?
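
    As a hedged alternative to hand-running rm on individual directories, a short Python sketch (the path is hypothetical) that deletes a large tree and logs, rather than stops on, anything it cannot remove:

        import shutil

        def log_and_continue(func, path, exc_info):
            # Called by shutil.rmtree for each item it fails to delete;
            # report the problem and keep going instead of aborting.
            print(f"Could not remove {path}: {exc_info[1]}")

        # Hypothetical location of the unwanted backup set on the external volume:
        target = "/Volumes/External/.Trashes/501/Backups.backupdb"
        shutil.rmtree(target, onerror=log_and_continue)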

  • PSD vs. layered TIF (on large volumes)

    Platform is Windows XP, InDesign CS.
    We currently have 750 GB volumes (Windows 2003 servers) where the InDesign documents and all graphic files reside.
    We are moving the data to a new server (NetApp storage) where the volume size is 3 TB. We have noticed that working with InDesign documents that have placed PSD files is slow: slow to place PSDs, and slow to print to a printer or to file. However, if we save all the graphic elements as layered TIF files then everything becomes quicker. Is there an issue with using LARGE volumes (or shares)? I ask because we don't have speed issues with smaller volume sizes.
    We have also noted that when using InDesign CS3 with placed PSDs that reside on large volumes, times are better; however, upgrading to CS3 is not possible for us at this time, as our desktop workstations are under-specced. Hardware updates are planned but will take many months to implement.

    No, these are not large files; they range between 5 and 80 MB in size. Adobe informs me that InDesign CS3 handles PSD files better. But I'm still very curious as to why, having migrated data from a small volume to a very large volume, InDesign is affected by it whenever PSD files are involved.

  • Processing large volumes of data in PL/SQL

    I'm working on a project which requires us to process large volumes of data on a weekly/monthly/quarterly basis, and I'm not sure we are doing it right, so any tips would be greatly appreciated.
    Requirement
    Source data is in a flat file in "short-fat" format i.e. each data record (a "case") has a key and up to 2000 variable values.
    A typical weekly file would have maybe 10,000 such cases i.e. around 20 million variable values.
    But we don't know which variables are used each week until we get the file, or where they are in the file records (this is determined via a set of meta-data definitions that the user selects at runtime). This makes identifying and validating each variable value a little more interesting.
    Target is a "long-thin" table, i.e. one record for each variable value (with numeric IDs as FKs to identify the parent variable and case).
    We only want to load variable values for cases which are entirely valid. This may be a merge i.e. variable values may already exist in the target table.
    There are various rules for validating the data against pre-existing data etc. These rules are specific to each variable, and have to be applied before we put the data in the target table. The users want to see the validation results - and may choose to bail out - before the data is written to the target table.
    Restrictions
    We have very limited permission to perform DDL e.g. to create new tables/indexes etc.
    We have no permission to use e.g. Oracle external tables, Oracle directories etc.
    We are working with standard Oracle tools i.e. PL/SQL and no DWH tools.
    DBAs are extremely resistant to giving us more disk space.
    We are on Oracle 9iR2, with no immediate prospect of moving to 10g.
    Current approach
    Source data is uploaded via SQL*Loader into static "short fat" tables.
    Some initial key validation is performed on these records.
    Dynamic SQL (plus BULK COLLECT etc) is used to pivot the short-fat data into an intermediate long-thin table, performing the validation on the fly via a combination of including reference values in the dynamic SQL and calling PL/SQL functions inside the dynamic SQL. This means we can pivot+validate the data in one step, and don't have to update the data with its validation status after we've pivoted it.
    This upload+pivot+validate step takes about 1 hour 15 minutes for around 15 million variable values.
    The subsequent "load to target table" step also has to apply substitution rules for certain "special values" or NULLs.
    We do this by BULK collecting the variable values from the intermediate long-thin table, for each valid case in turn, applying the substitution rules within the SQL, and inserting into/updating the target table as appropriate.
    Initially we did this via a SQL MERGE, but this was actually slower than doing an explicit check for existence and switching between INSERT and UPDATE accordingly (yes, that sounds fishy to me too).
    This "load" process takes around 90 minutes for the same 15 million variable values.
    Questions
    Why is it so slow? Our DBAs assure us we have lots of table-space etc, and that the server is plenty powerful enough.
    Any suggestions as to a better approach, given the restrictions we are working under?
    We've looked at Tom Kyte's stuff about creating temporary tables via CTAS, but we have had serious problems with dynamic SQL on this project, so we are very reluctant to introduce more of it unless it's absolutely necessary. In any case, we have serious problems getting permissions to create DB objects - tables, indexes etc - dynamically.
    So any advice would be gratefully received!
    Thanks,
    Chris

    We have 8 "short-fat" tables to hold the source data uploaded from the source file via SQL*Loader (the SQL*Loader step is fast). The data consists simply of strings of characters, which we treat simply as VARCHAR2 for the most part.
    These tables consist essentially of a case key (composite key initially) plus up to 250 data columns. 8*250 = 2000, so we can handle up to 2000 of these variable values. The source data may have 100 any number of variable values in each record, but each record in a given file has the same structure. Each file-load event may have a different set of variables in different locations, so we have to map the short-fat columns COL001 etc to the corresponding variable definition (for validation etc) at runtime.
    CASE_ID  VARCHAR2(13)
    COL001   VARCHAR2(10)
    ...
    COL250   VARCHAR2(10)
    We do a bit of initial validation in the short-fat tables, setting a surrogate key for each case etc (this is fast), then we pivot+validate this short-fat data column-by-column into a "long-thin" intermediate table, as this is the target format and we need to store the validation results anyway.
    The intermediate table looks similar to this:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10) -- from COL001 etc
    STATUS VARCHAR2(10) -- set during the pivot+validate process above
    The target table looks very similar, but holds cumulative data for many weeks etc:
    CASE_NUM_ID NUMBER(10) -- surrogate key to identify the parent case more easily
    VARIABLE_ID NUMBER(10) -- PK of variable definition used for validation and in target table
    VARIABLE_VALUE VARCHAR2(10)
    We only ever load valid data into the target table.
    Chris
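
    A minimal illustration of the short-fat to long-thin pivot described above, in plain Python rather than PL/SQL; the column-to-variable mapping and the validate callback are hypothetical stand-ins for the runtime meta-data definitions and per-variable rules:

        def pivot_short_fat(rows, col_to_variable_id, validate):
            # rows: dicts like {"CASE_NUM_ID": 1, "COL001": "A", ...}
            # col_to_variable_id: runtime mapping {"COL001": 101, ...} built
            #   from the meta-data definitions the user selects
            # validate: function(variable_id, value) -> status string
            long_thin = []
            for row in rows:
                for col_name, variable_id in col_to_variable_id.items():
                    value = row.get(col_name)
                    if value is None:
                        continue
                    long_thin.append({
                        "CASE_NUM_ID": row["CASE_NUM_ID"],
                        "VARIABLE_ID": variable_id,
                        "VARIABLE_VALUE": value,
                        "STATUS": validate(variable_id, value),
                    })
            return long_thin

        # Example with hypothetical data:
        # rows = [{"CASE_NUM_ID": 1, "COL001": "X", "COL002": "Y"}]
        # mapping = {"COL001": 101, "COL002": 102}
        # print(pivot_short_fat(rows, mapping, lambda vid, v: "VALID"))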

  • What is the best way to actively sync large quantities of files on work PCs across multiple mobile devices of employees in the field?

    I am looking for the best way for us to share files between our PCs and mobile devices. We have tons of PDF files and would benefit from being able to view/edit them while in the field, but we are not sure of the best way to do so…
    - iCloud Drive isn't ideal because it requires everyone to have and maintain their own files via their individual devices and iCloud addresses
    - Dropbox seems to work, but it doesn't actually store a local copy of the files, so we have to load each file to view it, and its document viewers are not the most user-friendly
    - Google Drive we used a long time ago, when it first came out, and it kind of worked well, but due to the volume of files and size of folders we constantly ran into crashing issues and sync problems, so we gave up on it…
    Ideally we would like to have local folders on our desktops that sync automatically to our mobile devices… Do you know of a way this would be possible, or do we need to purchase one of those large, robust field-management programs?

    Hi Bob.
    So what I should have done on my PC, for all my files, is go to the File menu and use the Package command, which would have packaged the document together with the images so they wouldn't need to be relinked.
    From now on I will only be working on a Mac (I gave away my PC), but I'm sure I could use a friend's PC (or at a later point install Windows on my Mac). But to use the Package command on the PC, I need to actually have the images in the correct folder on the PC, right?
